2023-07-21 11:15:51,864 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7 2023-07-21 11:15:51,881 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins 2023-07-21 11:15:51,898 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 11:15:51,898 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09, deleteOnExit=true 2023-07-21 11:15:51,899 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 11:15:51,899 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/test.cache.data in system properties and HBase conf 2023-07-21 11:15:51,900 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 11:15:51,900 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir in system properties and HBase conf 2023-07-21 11:15:51,901 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 11:15:51,901 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 11:15:51,901 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 11:15:52,036 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-21 11:15:52,502 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 11:15:52,506 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:15:52,506 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 11:15:52,506 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 11:15:52,506 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:15:52,507 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 11:15:52,507 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 11:15:52,507 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 11:15:52,508 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:15:52,508 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 11:15:52,508 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/nfs.dump.dir in system properties and HBase conf 2023-07-21 11:15:52,508 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir in system properties and HBase conf 2023-07-21 11:15:52,509 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 11:15:52,509 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 11:15:52,509 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 11:15:53,092 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:15:53,095 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:15:53,396 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 11:15:53,659 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-21 11:15:53,690 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:15:53,739 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:15:53,775 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/Jetty_localhost_localdomain_39213_hdfs____.b3ym48/webapp 2023-07-21 11:15:53,914 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:39213 2023-07-21 11:15:53,927 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 11:15:53,927 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 11:15:54,578 WARN [Listener at localhost.localdomain/36511] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:15:54,664 WARN [Listener at localhost.localdomain/36511] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:15:54,685 WARN [Listener at localhost.localdomain/36511] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:15:54,693 INFO [Listener at localhost.localdomain/36511] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:15:54,701 INFO [Listener at 
localhost.localdomain/36511] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/Jetty_localhost_41377_datanode____.d01083/webapp 2023-07-21 11:15:54,808 INFO [Listener at localhost.localdomain/36511] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41377 2023-07-21 11:15:55,256 WARN [Listener at localhost.localdomain/37357] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:15:55,394 WARN [Listener at localhost.localdomain/37357] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:15:55,402 WARN [Listener at localhost.localdomain/37357] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:15:55,405 INFO [Listener at localhost.localdomain/37357] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:15:55,412 INFO [Listener at localhost.localdomain/37357] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/Jetty_localhost_43299_datanode____.a7r3ja/webapp 2023-07-21 11:15:55,525 INFO [Listener at localhost.localdomain/37357] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43299 2023-07-21 11:15:55,547 WARN [Listener at localhost.localdomain/41237] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:15:55,590 WARN [Listener at localhost.localdomain/41237] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 11:15:55,596 WARN [Listener at localhost.localdomain/41237] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 11:15:55,598 INFO [Listener at localhost.localdomain/41237] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 11:15:55,604 INFO [Listener at localhost.localdomain/41237] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/Jetty_localhost_39561_datanode____.fl7ekv/webapp 2023-07-21 11:15:55,732 INFO [Listener at localhost.localdomain/41237] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39561 2023-07-21 11:15:55,772 WARN [Listener at localhost.localdomain/33557] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 11:15:55,989 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x36fc5c8830dab08d: Processing first storage report for DS-b96b1104-46b1-4a71-a873-af9769219804 from datanode 359ae0fa-be87-41cd-9a97-293b91cb17e2 2023-07-21 11:15:55,990 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x36fc5c8830dab08d: from storage DS-b96b1104-46b1-4a71-a873-af9769219804 node DatanodeRegistration(127.0.0.1:33003, datanodeUuid=359ae0fa-be87-41cd-9a97-293b91cb17e2, infoPort=37043, infoSecurePort=0, ipcPort=37357, storageInfo=lv=-57;cid=testClusterID;nsid=1979738401;c=1689938153171), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-21 11:15:55,991 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe67e4318db86f724: Processing first storage report for DS-520c98cd-48f2-458b-87c2-acc7c5f40723 from datanode 7c1fb44b-3290-4700-b701-b83031f3b3d9 2023-07-21 11:15:55,991 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe67e4318db86f724: from storage DS-520c98cd-48f2-458b-87c2-acc7c5f40723 node DatanodeRegistration(127.0.0.1:36321, datanodeUuid=7c1fb44b-3290-4700-b701-b83031f3b3d9, infoPort=44939, infoSecurePort=0, ipcPort=33557, storageInfo=lv=-57;cid=testClusterID;nsid=1979738401;c=1689938153171), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:15:55,991 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x36fc5c8830dab08d: Processing first storage report for DS-da3279a4-5fc1-4bdf-b812-aaa4c64aaad3 from datanode 359ae0fa-be87-41cd-9a97-293b91cb17e2 2023-07-21 11:15:55,991 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x36fc5c8830dab08d: from storage DS-da3279a4-5fc1-4bdf-b812-aaa4c64aaad3 node DatanodeRegistration(127.0.0.1:33003, datanodeUuid=359ae0fa-be87-41cd-9a97-293b91cb17e2, infoPort=37043, infoSecurePort=0, ipcPort=37357, storageInfo=lv=-57;cid=testClusterID;nsid=1979738401;c=1689938153171), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:15:55,991 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe67e4318db86f724: Processing first storage report for DS-45f32383-11b8-4ca5-a4fb-b8c63c09e831 from datanode 7c1fb44b-3290-4700-b701-b83031f3b3d9 2023-07-21 11:15:55,992 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe67e4318db86f724: from storage DS-45f32383-11b8-4ca5-a4fb-b8c63c09e831 node DatanodeRegistration(127.0.0.1:36321, datanodeUuid=7c1fb44b-3290-4700-b701-b83031f3b3d9, infoPort=44939, infoSecurePort=0, ipcPort=33557, storageInfo=lv=-57;cid=testClusterID;nsid=1979738401;c=1689938153171), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:15:55,992 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xff1f5c02a0cb435c: Processing first storage report for DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1 from datanode 4e13056a-3c02-4d90-a700-907346e45ae0 2023-07-21 11:15:55,992 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xff1f5c02a0cb435c: from storage DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1 node DatanodeRegistration(127.0.0.1:44393, datanodeUuid=4e13056a-3c02-4d90-a700-907346e45ae0, infoPort=38051, infoSecurePort=0, ipcPort=41237, storageInfo=lv=-57;cid=testClusterID;nsid=1979738401;c=1689938153171), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 11:15:55,992 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xff1f5c02a0cb435c: Processing first storage report for 
DS-a259fedc-77f7-412e-beb9-a95a39dd2a88 from datanode 4e13056a-3c02-4d90-a700-907346e45ae0 2023-07-21 11:15:55,993 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xff1f5c02a0cb435c: from storage DS-a259fedc-77f7-412e-beb9-a95a39dd2a88 node DatanodeRegistration(127.0.0.1:44393, datanodeUuid=4e13056a-3c02-4d90-a700-907346e45ae0, infoPort=38051, infoSecurePort=0, ipcPort=41237, storageInfo=lv=-57;cid=testClusterID;nsid=1979738401;c=1689938153171), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 11:15:56,285 DEBUG [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7 2023-07-21 11:15:56,420 INFO [Listener at localhost.localdomain/33557] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/zookeeper_0, clientPort=61077, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 11:15:56,442 INFO [Listener at localhost.localdomain/33557] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61077 2023-07-21 11:15:56,455 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:56,459 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:56,904 INFO [Listener at localhost.localdomain/33557] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae with version=8 2023-07-21 11:15:56,905 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/hbase-staging 2023-07-21 11:15:56,916 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 11:15:56,916 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 11:15:56,916 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 11:15:56,917 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
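The option string printed near the top of this log (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1) is HBaseTestingUtility's StartMiniClusterOption. A minimal sketch of the kind of setup that produces the startup sequence above; the class name, main-method wrapper and inline teardown are illustrative, not copied from TestRSGroupsBasics:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    // Mirror the StartMiniClusterOption printed at the top of the log.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    UTIL.startMiniCluster(option);   // brings up mini DFS, mini ZooKeeper, HMaster and region servers
    try {
      // ... exercise the cluster through UTIL.getConnection() / UTIL.getAdmin() ...
    } finally {
      UTIL.shutdownMiniCluster();    // what an @AfterClass teardown would do
    }
  }
}
```

In the actual test this setup and teardown would live in @BeforeClass/@AfterClass methods, bounded by the HBaseClassTestRule timeout of 13 minutes reported in the second log entry.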
2023-07-21 11:15:57,335 INFO [Listener at localhost.localdomain/33557] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-21 11:15:57,983 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:15:58,030 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:58,031 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:58,031 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:15:58,031 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:58,032 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:15:58,213 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:15:58,291 DEBUG [Listener at localhost.localdomain/33557] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-21 11:15:58,391 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:41077 2023-07-21 11:15:58,408 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:58,412 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:58,447 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41077 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:15:58,492 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:410770x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:15:58,495 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41077-0x101879756880000 connected 2023-07-21 11:15:58,536 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:15:58,537 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-21 11:15:58,545 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:15:58,557 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41077 2023-07-21 11:15:58,557 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41077 2023-07-21 11:15:58,559 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41077 2023-07-21 11:15:58,559 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41077 2023-07-21 11:15:58,559 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41077 2023-07-21 11:15:58,599 INFO [Listener at localhost.localdomain/33557] log.Log(170): Logging initialized @7582ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-21 11:15:58,772 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:15:58,773 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:15:58,774 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:15:58,777 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 11:15:58,777 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:15:58,778 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:15:58,783 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
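The RpcExecutor entries above (default.FPBQ.Fifo with handlerCount=3, the priority.RWQ read/write split, and so on) reflect hbase-site call-queue settings. A hedged sketch of the configuration keys that shape these executors; the values are illustrative, not settings verified from this run:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration conf = HBaseConfiguration.create();
// Total RPC handler threads per server; the mini cluster above runs with small counts.
conf.setInt("hbase.regionserver.handler.count", 3);
// Fraction of handlers that get their own call queue (0 => one shared queue per executor).
conf.setFloat("hbase.ipc.server.callqueue.handler.factor", 0.0f);
// Ratios that split call queues between read/write and scan handlers.
conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.0f);
conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f);
```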
2023-07-21 11:15:58,851 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 43969 2023-07-21 11:15:58,853 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:15:58,902 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:58,907 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4eea13c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:15:58,908 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:58,909 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31f3f57b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:15:59,121 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:15:59,137 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:15:59,137 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:15:59,139 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:15:59,146 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,173 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@292c560c{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-43969-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6829333536351944493/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:15:59,185 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@296842bc{HTTP/1.1, (http/1.1)}{0.0.0.0:43969} 2023-07-21 11:15:59,186 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @8168ms 2023-07-21 11:15:59,189 INFO [Listener at localhost.localdomain/33557] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae, hbase.cluster.distributed=false 2023-07-21 11:15:59,263 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:15:59,264 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,264 INFO [Listener 
at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,264 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:15:59,265 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,265 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:15:59,270 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:15:59,275 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:40783 2023-07-21 11:15:59,277 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:15:59,283 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:15:59,284 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:59,286 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:59,288 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40783 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:15:59,291 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:407830x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:15:59,292 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:407830x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:15:59,294 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:407830x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:15:59,296 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40783-0x101879756880001 connected 2023-07-21 11:15:59,298 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:15:59,298 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40783 2023-07-21 11:15:59,299 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): 
Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40783 2023-07-21 11:15:59,299 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40783 2023-07-21 11:15:59,300 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40783 2023-07-21 11:15:59,300 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40783 2023-07-21 11:15:59,302 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:15:59,302 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:15:59,303 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:15:59,304 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:15:59,304 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:15:59,304 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:15:59,305 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:15:59,307 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 37741 2023-07-21 11:15:59,307 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:15:59,313 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,313 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64e08883{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:15:59,314 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,314 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@184daf7c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:15:59,425 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:15:59,427 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:15:59,427 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:15:59,427 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:15:59,428 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,432 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6d6a5bc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-37741-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2020283283005954516/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:15:59,433 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@8f1e840{HTTP/1.1, (http/1.1)}{0.0.0.0:37741} 2023-07-21 11:15:59,433 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @8416ms 2023-07-21 11:15:59,445 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:15:59,445 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,446 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,446 
INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:15:59,446 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,447 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:15:59,447 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:15:59,449 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:39805 2023-07-21 11:15:59,450 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:15:59,451 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:15:59,452 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:59,454 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:59,455 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39805 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:15:59,459 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:398050x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:15:59,461 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:398050x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:15:59,462 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:398050x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:15:59,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39805-0x101879756880002 connected 2023-07-21 11:15:59,464 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:15:59,466 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39805 2023-07-21 11:15:59,466 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39805 2023-07-21 11:15:59,468 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39805 2023-07-21 11:15:59,469 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39805 2023-07-21 11:15:59,469 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39805 2023-07-21 11:15:59,473 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:15:59,473 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:15:59,473 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:15:59,474 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:15:59,474 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:15:59,474 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:15:59,475 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
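Each region server above also reports its BlockCache (782.40 MB with 64 KB blocks) and MobFileCache (cacheSize=1000) sizing. As a hedged illustration, that sizing is governed by configuration along these lines; the values shown are the usual defaults, not settings taken from this run:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration conf = HBaseConfiguration.create();
// Fraction of the JVM heap reserved for the on-heap BlockCache (782.40 MB here).
conf.setFloat("hfile.block.cache.size", 0.4f);
// Maximum number of open MOB files kept in the MobFileCache.
conf.setInt("hbase.mob.file.cache.size", 1000);
```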
2023-07-21 11:15:59,476 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 41777 2023-07-21 11:15:59,476 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:15:59,479 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,479 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c951266{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:15:59,480 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,480 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d364a00{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:15:59,600 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:15:59,601 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:15:59,601 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:15:59,601 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:15:59,603 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,604 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3ff1fc24{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-41777-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6604868322309621138/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:15:59,605 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@1aae97ce{HTTP/1.1, (http/1.1)}{0.0.0.0:41777} 2023-07-21 11:15:59,606 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @8589ms 2023-07-21 11:15:59,621 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:15:59,621 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,622 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:15:59,622 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:15:59,622 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:15:59,622 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:15:59,622 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:15:59,624 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34719 2023-07-21 11:15:59,625 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:15:59,626 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:15:59,628 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:59,629 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:15:59,630 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34719 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:15:59,633 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:347190x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:15:59,635 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34719-0x101879756880003 connected 2023-07-21 11:15:59,635 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:15:59,636 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:15:59,636 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:15:59,641 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34719 2023-07-21 11:15:59,641 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34719 2023-07-21 11:15:59,642 DEBUG [Listener at localhost.localdomain/33557] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34719 2023-07-21 11:15:59,643 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34719 2023-07-21 11:15:59,643 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34719 2023-07-21 11:15:59,646 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:15:59,647 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:15:59,647 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:15:59,647 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:15:59,647 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:15:59,648 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:15:59,648 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:15:59,649 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 37877 2023-07-21 11:15:59,649 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:15:59,654 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,654 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@392cca42{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:15:59,655 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,656 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6aac43d9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:15:59,759 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:15:59,760 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:15:59,760 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:15:59,761 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:15:59,762 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:15:59,763 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@57bde63a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-37877-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5437556490087306139/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:15:59,765 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@50ceb1f8{HTTP/1.1, (http/1.1)}{0.0.0.0:37877} 2023-07-21 11:15:59,765 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @8748ms 2023-07-21 11:15:59,774 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:15:59,779 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@24ed8efe{HTTP/1.1, (http/1.1)}{0.0.0.0:37247} 2023-07-21 11:15:59,779 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @8762ms 2023-07-21 11:15:59,780 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:15:59,790 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:15:59,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:15:59,808 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:15:59,808 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:15:59,808 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:15:59,808 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:15:59,809 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:15:59,810 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:15:59,811 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,41077,1689938157103 from backup master directory 2023-07-21 11:15:59,811 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:15:59,815 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:15:59,815 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:15:59,816 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:15:59,816 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:15:59,819 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-21 11:15:59,821 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-21 11:15:59,923 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/hbase.id with ID: 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:15:59,979 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:00,006 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:00,092 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1512fdf2 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:00,126 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50eb7af9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:00,158 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:00,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 11:16:00,180 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 11:16:00,180 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 11:16:00,182 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:00,187 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:00,189 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-21 11:16:00,227 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE =>
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store-tmp 2023-07-21 11:16:00,269 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:00,270 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:16:00,270 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:00,270 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:00,270 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:16:00,270 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:00,270 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:00,270 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:00,272 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:00,300 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41077%2C1689938157103, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/oldWALs, maxLogs=10 2023-07-21 11:16:00,358 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:00,358 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:00,358 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:00,369 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:16:00,446 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 2023-07-21 11:16:00,447 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]] 2023-07-21 11:16:00,449 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:00,449 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:00,455 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:00,457 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:00,559 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:00,578 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 11:16:00,624 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 11:16:00,644 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-21 11:16:00,650 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:00,652 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:00,678 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:00,684 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:00,685 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10803417600, jitterRate=0.006146669387817383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:00,686 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:00,689 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 11:16:00,721 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 11:16:00,722 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 11:16:00,726 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 11:16:00,729 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 11:16:00,775 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 45 msec 2023-07-21 11:16:00,775 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 11:16:00,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 11:16:00,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 11:16:00,822 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 11:16:00,831 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 11:16:00,837 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 11:16:00,839 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:00,841 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 11:16:00,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 11:16:00,864 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 11:16:00,870 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:00,870 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:00,870 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:00,872 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:00,874 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:00,877 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,41077,1689938157103, sessionid=0x101879756880000, setting cluster-up flag (Was=false) 2023-07-21 11:16:00,899 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:00,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 11:16:00,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:00,923 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:00,936 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 11:16:00,938 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:00,941 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/.tmp 2023-07-21 11:16:00,970 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:00,971 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:00,983 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:00,984 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:00,982 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:00,990 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:00,994 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:00,994 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:00,994 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:00,995 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:00,995 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:00,995 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:01,003 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:01,003 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:01,013 DEBUG [RS:1;jenkins-hbase17:39805] zookeeper.ReadOnlyZKClient(139): Connect 0x7aac64f5 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 
2023-07-21 11:16:01,004 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:01,021 DEBUG [RS:0;jenkins-hbase17:40783] zookeeper.ReadOnlyZKClient(139): Connect 0x294ce4c7 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:01,035 DEBUG [RS:2;jenkins-hbase17:34719] zookeeper.ReadOnlyZKClient(139): Connect 0x3cab6281 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:01,090 DEBUG [RS:1;jenkins-hbase17:39805] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3faef5c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:01,093 DEBUG [RS:1;jenkins-hbase17:39805] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f56ed93, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:01,112 DEBUG [RS:0;jenkins-hbase17:40783] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49ca3ddb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:01,113 DEBUG [RS:0;jenkins-hbase17:40783] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62c27020, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:01,134 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 11:16:01,138 DEBUG [RS:2;jenkins-hbase17:34719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@326c9b64, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:01,139 DEBUG [RS:2;jenkins-hbase17:34719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@365309f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:01,151 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:39805 2023-07-21 11:16:01,154 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:34719 2023-07-21 11:16:01,156 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 
2023-07-21 11:16:01,159 INFO [RS:1;jenkins-hbase17:39805] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:01,192 INFO [RS:1;jenkins-hbase17:39805] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:01,191 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 11:16:01,192 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:01,176 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:40783 2023-07-21 11:16:01,159 INFO [RS:2;jenkins-hbase17:34719] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:01,194 INFO [RS:2;jenkins-hbase17:34719] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:01,194 INFO [RS:0;jenkins-hbase17:40783] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:01,194 INFO [RS:0;jenkins-hbase17:40783] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:01,193 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 11:16:01,192 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:01,194 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:01,194 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 11:16:01,199 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:39805, startcode=1689938159444 2023-07-21 11:16:01,200 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:34719, startcode=1689938159621 2023-07-21 11:16:01,209 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:40783, startcode=1689938159262 2023-07-21 11:16:01,241 DEBUG [RS:2;jenkins-hbase17:34719] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:01,247 DEBUG [RS:1;jenkins-hbase17:39805] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:01,247 DEBUG [RS:0;jenkins-hbase17:40783] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:01,393 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55831, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:01,394 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42823, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:01,395 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56003, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:01,411 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:01,427 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 11:16:01,436 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:01,437 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:01,456 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:16:01,456 WARN [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 11:16:01,463 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:16:01,463 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:16:01,463 WARN [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 11:16:01,463 WARN [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 11:16:01,491 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:16:01,497 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:16:01,498 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:16:01,498 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 11:16:01,500 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:01,500 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:01,500 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:01,501 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:01,501 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 11:16:01,501 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,501 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:01,501 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,515 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689938191515 2023-07-21 11:16:01,519 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 11:16:01,521 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:16:01,523 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 11:16:01,525 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 11:16:01,526 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 
'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:01,535 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 11:16:01,536 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 11:16:01,536 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 11:16:01,536 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 11:16:01,538 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,544 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 11:16:01,547 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 11:16:01,547 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 11:16:01,565 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:40783, startcode=1689938159262 2023-07-21 11:16:01,565 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:34719, startcode=1689938159621 2023-07-21 11:16:01,567 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:01,569 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:01,570 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:39805, startcode=1689938159444 2023-07-21 11:16:01,571 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:01,571 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 11:16:01,572 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 11:16:01,574 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938161574,5,FailOnTimeoutGroup] 2023-07-21 11:16:01,575 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938161574,5,FailOnTimeoutGroup] 2023-07-21 11:16:01,575 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,575 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 11:16:01,578 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:16:01,578 WARN [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 11:16:01,579 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,579 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,581 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:16:01,581 WARN [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 
2023-07-21 11:16:01,581 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 11:16:01,581 WARN [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 11:16:01,679 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:01,681 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:01,681 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:01,736 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:01,741 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:01,746 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:01,747 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName 
info 2023-07-21 11:16:01,749 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:01,749 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:16:01,760 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:01,761 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:01,762 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:01,763 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:01,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:01,770 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:01,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:01,773 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:01,779 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:34719, startcode=1689938159621 2023-07-21 11:16:01,781 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:01,791 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:01,794 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:01,790 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:39805, startcode=1689938159444 2023-07-21 11:16:01,790 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:40783, startcode=1689938159262 2023-07-21 11:16:01,795 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:16:01,800 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:16:01,805 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:01,819 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:01,819 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:01,819 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43969 2023-07-21 11:16:01,825 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:01,827 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:16:01,827 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 11:16:01,829 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:01,829 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:01,829 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:16:01,833 DEBUG [RS:2;jenkins-hbase17:34719] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:01,834 WARN [RS:2;jenkins-hbase17:34719] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:01,834 INFO [RS:2;jenkins-hbase17:34719] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:01,835 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:01,836 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:01,836 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:01,836 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:01,836 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:01,837 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43969 2023-07-21 11:16:01,837 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43969 2023-07-21 11:16:01,844 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:01,844 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:01,847 DEBUG [RS:0;jenkins-hbase17:40783] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:01,848 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9800541600, jitterRate=-0.08725343644618988}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:01,848 DEBUG [RS:1;jenkins-hbase17:39805] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:01,850 WARN [RS:1;jenkins-hbase17:39805] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:01,849 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:01,855 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:01,855 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:01,855 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:01,855 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:01,855 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:01,847 WARN [RS:0;jenkins-hbase17:40783] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:01,856 INFO [RS:0;jenkins-hbase17:40783] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:01,857 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:01,860 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:01,863 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:01,850 INFO [RS:1;jenkins-hbase17:39805] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:01,863 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:01,864 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,39805,1689938159444] 2023-07-21 11:16:01,864 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,34719,1689938159621] 2023-07-21 11:16:01,864 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,40783,1689938159262] 2023-07-21 11:16:01,869 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 11:16:01,869 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 11:16:01,879 DEBUG [RS:2;jenkins-hbase17:34719] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:01,879 DEBUG [RS:0;jenkins-hbase17:40783] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:01,880 DEBUG [RS:0;jenkins-hbase17:40783] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:01,880 DEBUG [RS:2;jenkins-hbase17:34719] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:01,880 DEBUG [RS:0;jenkins-hbase17:40783] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:01,881 DEBUG [RS:2;jenkins-hbase17:34719] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:01,883 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 11:16:01,894 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:01,896 DEBUG [RS:2;jenkins-hbase17:34719] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:01,903 DEBUG [RS:1;jenkins-hbase17:39805] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:01,904 DEBUG [RS:1;jenkins-hbase17:39805] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:01,905 DEBUG [RS:1;jenkins-hbase17:39805] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:01,906 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:01,909 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 11:16:01,913 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 11:16:01,917 INFO [RS:1;jenkins-hbase17:39805] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:01,919 INFO [RS:0;jenkins-hbase17:40783] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:01,920 INFO [RS:2;jenkins-hbase17:34719] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:01,951 INFO [RS:1;jenkins-hbase17:39805] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:01,969 INFO [RS:0;jenkins-hbase17:40783] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:01,953 INFO [RS:2;jenkins-hbase17:34719] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:01,974 INFO [RS:0;jenkins-hbase17:40783] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:01,974 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:01,974 INFO [RS:1;jenkins-hbase17:39805] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:01,975 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,976 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:01,976 INFO [RS:2;jenkins-hbase17:34719] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:01,976 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,976 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:01,985 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:01,994 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,994 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:01,994 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:01,995 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,995 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,995 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:01,996 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:01,996 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,996 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:2;jenkins-hbase17:34719] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,997 DEBUG [RS:1;jenkins-hbase17:39805] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,998 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,998 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:01,998 DEBUG [RS:0;jenkins-hbase17:40783] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:02,016 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,016 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,017 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,020 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,020 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:02,021 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,025 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,026 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,026 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,045 INFO [RS:1;jenkins-hbase17:39805] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:02,045 INFO [RS:0;jenkins-hbase17:40783] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:02,047 INFO [RS:2;jenkins-hbase17:34719] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:02,050 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34719,1689938159621-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,052 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40783,1689938159262-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,073 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39805,1689938159444-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,075 DEBUG [jenkins-hbase17:41077] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:16:02,092 INFO [RS:0;jenkins-hbase17:40783] regionserver.Replication(203): jenkins-hbase17.apache.org,40783,1689938159262 started 2023-07-21 11:16:02,092 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,40783,1689938159262, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:40783, sessionid=0x101879756880001 2023-07-21 11:16:02,093 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:02,093 DEBUG [RS:0;jenkins-hbase17:40783] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:02,093 DEBUG [RS:0;jenkins-hbase17:40783] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40783,1689938159262' 2023-07-21 11:16:02,093 DEBUG [RS:0;jenkins-hbase17:40783] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:02,095 DEBUG [RS:0;jenkins-hbase17:40783] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:02,096 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:02,096 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:02,096 DEBUG [RS:0;jenkins-hbase17:40783] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,40783,1689938159262 
2023-07-21 11:16:02,096 DEBUG [RS:0;jenkins-hbase17:40783] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40783,1689938159262' 2023-07-21 11:16:02,097 DEBUG [RS:0;jenkins-hbase17:40783] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:02,098 DEBUG [RS:0;jenkins-hbase17:40783] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:02,099 DEBUG [RS:0;jenkins-hbase17:40783] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:02,099 INFO [RS:0;jenkins-hbase17:40783] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:02,100 INFO [RS:0;jenkins-hbase17:40783] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:16:02,103 DEBUG [jenkins-hbase17:41077] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:02,105 DEBUG [jenkins-hbase17:41077] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:02,106 DEBUG [jenkins-hbase17:41077] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:02,106 DEBUG [jenkins-hbase17:41077] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:02,106 DEBUG [jenkins-hbase17:41077] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:02,106 INFO [RS:2;jenkins-hbase17:34719] regionserver.Replication(203): jenkins-hbase17.apache.org,34719,1689938159621 started 2023-07-21 11:16:02,107 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,34719,1689938159621, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:34719, sessionid=0x101879756880003 2023-07-21 11:16:02,107 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:02,107 DEBUG [RS:2;jenkins-hbase17:34719] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:02,118 INFO [RS:1;jenkins-hbase17:39805] regionserver.Replication(203): jenkins-hbase17.apache.org,39805,1689938159444 started 2023-07-21 11:16:02,119 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,39805,1689938159444, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:39805, sessionid=0x101879756880002 2023-07-21 11:16:02,119 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:02,119 DEBUG [RS:1;jenkins-hbase17:39805] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:02,118 DEBUG [RS:2;jenkins-hbase17:34719] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34719,1689938159621' 2023-07-21 11:16:02,120 DEBUG [RS:2;jenkins-hbase17:34719] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:02,119 DEBUG [RS:1;jenkins-hbase17:39805] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase17.apache.org,39805,1689938159444' 2023-07-21 11:16:02,120 DEBUG [RS:1;jenkins-hbase17:39805] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:02,121 DEBUG [RS:1;jenkins-hbase17:39805] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:02,121 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:02,121 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:02,122 DEBUG [RS:1;jenkins-hbase17:39805] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:02,122 DEBUG [RS:1;jenkins-hbase17:39805] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,39805,1689938159444' 2023-07-21 11:16:02,122 DEBUG [RS:1;jenkins-hbase17:39805] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:02,124 DEBUG [RS:1;jenkins-hbase17:39805] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:02,126 DEBUG [RS:1;jenkins-hbase17:39805] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:02,126 INFO [RS:1;jenkins-hbase17:39805] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:02,127 INFO [RS:1;jenkins-hbase17:39805] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:16:02,128 DEBUG [RS:2;jenkins-hbase17:34719] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:02,129 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:02,129 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:02,129 DEBUG [RS:2;jenkins-hbase17:34719] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:02,129 DEBUG [RS:2;jenkins-hbase17:34719] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34719,1689938159621' 2023-07-21 11:16:02,129 DEBUG [RS:2;jenkins-hbase17:34719] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:02,129 DEBUG [RS:2;jenkins-hbase17:34719] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:02,130 DEBUG [RS:2;jenkins-hbase17:34719] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:02,130 INFO [RS:2;jenkins-hbase17:34719] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:02,130 INFO [RS:2;jenkins-hbase17:34719] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:16:02,131 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34719,1689938159621, state=OPENING 2023-07-21 11:16:02,139 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 11:16:02,140 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:02,144 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:02,148 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34719,1689938159621}] 2023-07-21 11:16:02,237 INFO [RS:0;jenkins-hbase17:40783] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40783%2C1689938159262, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:02,243 INFO [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C39805%2C1689938159444, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:02,249 INFO [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34719%2C1689938159621, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:02,328 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:02,332 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:02,334 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:02,335 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:02,338 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:02,338 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:02,356 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:02,370 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:02,371 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:02,394 INFO [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621/jenkins-hbase17.apache.org%2C34719%2C1689938159621.1689938162255 2023-07-21 11:16:02,397 INFO [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444/jenkins-hbase17.apache.org%2C39805%2C1689938159444.1689938162261 2023-07-21 11:16:02,400 DEBUG [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]] 2023-07-21 11:16:02,412 DEBUG [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]] 2023-07-21 11:16:02,416 INFO [RS:0;jenkins-hbase17:40783] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262/jenkins-hbase17.apache.org%2C40783%2C1689938159262.1689938162268 2023-07-21 11:16:02,417 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:02,421 DEBUG [RS:0;jenkins-hbase17:40783] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], 
DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:02,425 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:02,435 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:02,458 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:16:02,459 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:02,463 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34719%2C1689938159621.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:02,492 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:02,499 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:02,499 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:02,513 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621/jenkins-hbase17.apache.org%2C34719%2C1689938159621.meta.1689938162467.meta 2023-07-21 11:16:02,517 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:02,517 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:02,520 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:02,523 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: 
region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:16:02,526 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 11:16:02,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:16:02,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:02,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:16:02,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:16:02,541 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:02,547 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:02,550 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:02,551 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:16:02,553 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:02,553 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:16:02,555 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:02,555 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:02,556 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:02,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:02,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:02,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:02,560 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:02,560 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:02,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:02,564 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:02,569 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:02,576 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:16:02,584 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:02,586 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11464406560, jitterRate=0.06770606338977814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:02,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:02,600 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689938162402 2023-07-21 11:16:02,627 WARN [ReadOnlyZKClient-127.0.0.1:61077@0x1512fdf2] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 11:16:02,634 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34719,1689938159621, state=OPEN 2023-07-21 11:16:02,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:16:02,638 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:16:02,639 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:02,639 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:02,657 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 11:16:02,657 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34719,1689938159621 in 491 msec 2023-07-21 11:16:02,663 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:02,676 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34342, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:02,683 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 11:16:02,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 776 msec 2023-07-21 11:16:02,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.4850 sec 2023-07-21 11:16:02,694 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region 
servers to report in: status=null, state=RUNNING, startTime=1689938162694, completionTime=-1 2023-07-21 11:16:02,695 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 11:16:02,695 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 11:16:02,706 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:02,719 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:16:02,721 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:16:02,766 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 11:16:02,767 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689938222767 2023-07-21 11:16:02,767 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689938282767 2023-07-21 11:16:02,767 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 72 msec 2023-07-21 11:16:02,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41077,1689938157103-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41077,1689938157103-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41077,1689938157103-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:02,813 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:02,815 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:41077, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,816 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:02,822 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:02,827 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 11:16:02,847 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:02,850 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb empty. 2023-07-21 11:16:02,851 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:02,851 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 11:16:02,855 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-21 11:16:02,856 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:02,866 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 11:16:02,871 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:02,877 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:02,925 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:02,932 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d empty. 
2023-07-21 11:16:02,939 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:02,939 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 11:16:03,035 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:03,040 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:03,041 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:03,042 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2bd94f497343684e2f5a451c6e430d4d, NAME => 'hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:03,247 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:03,247 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2bd94f497343684e2f5a451c6e430d4d, disabling compactions & flushes 2023-07-21 11:16:03,247 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:03,247 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:03,247 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
after waiting 0 ms 2023-07-21 11:16:03,247 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:03,247 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:03,247 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:03,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:03,255 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:03,255 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:03,255 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:03,255 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 0 ms 2023-07-21 11:16:03,255 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:03,255 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:03,255 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:03,257 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:03,261 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:03,279 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938163262"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938163262"}]},"ts":"1689938163262"} 2023-07-21 11:16:03,284 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938163260"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938163260"}]},"ts":"1689938163260"} 2023-07-21 11:16:03,321 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 11:16:03,329 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:03,335 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938163329"}]},"ts":"1689938163329"} 2023-07-21 11:16:03,336 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:03,338 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:03,340 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938163340"}]},"ts":"1689938163340"} 2023-07-21 11:16:03,341 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 11:16:03,362 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 11:16:03,364 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:03,364 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:03,365 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:03,365 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:03,365 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:03,372 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:03,372 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:03,372 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:03,372 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:03,373 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:03,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN}] 2023-07-21 11:16:03,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN}] 2023-07-21 11:16:03,386 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN 2023-07-21 11:16:03,393 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40783,1689938159262; forceNewPlan=false, retain=false 2023-07-21 11:16:03,396 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN 2023-07-21 11:16:03,400 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,34719,1689938159621; forceNewPlan=false, retain=false 2023-07-21 11:16:03,402 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-21 11:16:03,406 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:03,406 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938163405"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938163405"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938163405"}]},"ts":"1689938163405"} 2023-07-21 11:16:03,413 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:03,413 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938163413"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938163413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938163413"}]},"ts":"1689938163413"} 2023-07-21 11:16:03,421 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:03,426 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,34719,1689938159621}] 2023-07-21 11:16:03,588 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:03,589 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:03,594 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48632, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:03,599 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:03,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:03,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:03,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. service=MultiRowMutationService 2023-07-21 11:16:03,601 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 11:16:03,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:03,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,608 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:03,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2bd94f497343684e2f5a451c6e430d4d, NAME => 'hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:03,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,611 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:03,611 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,611 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,616 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,620 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:03,620 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,620 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:03,621 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2782e41606006289532e239f665ea4eb columnFamilyName m 2023-07-21 11:16:03,622 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(310): Store=2782e41606006289532e239f665ea4eb/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 
11:16:03,623 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:03,624 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:03,625 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2bd94f497343684e2f5a451c6e430d4d columnFamilyName info 2023-07-21 11:16:03,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,626 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(310): Store=2bd94f497343684e2f5a451c6e430d4d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:03,628 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:03,636 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:03,638 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:03,641 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2782e41606006289532e239f665ea4eb; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6c7f7078, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:03,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:03,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:03,644 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., pid=9, masterSystemTime=1689938163588 2023-07-21 11:16:03,644 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2bd94f497343684e2f5a451c6e430d4d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11169344320, jitterRate=0.040226250886917114}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:03,644 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:03,648 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., pid=8, masterSystemTime=1689938163588 2023-07-21 11:16:03,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:03,651 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:03,653 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:03,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:03,654 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938163652"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938163652"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938163652"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938163652"}]},"ts":"1689938163652"} 2023-07-21 11:16:03,654 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:03,657 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:03,658 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938163656"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938163656"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938163656"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938163656"}]},"ts":"1689938163656"} 2023-07-21 11:16:03,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 11:16:03,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,34719,1689938159621 in 238 msec 2023-07-21 11:16:03,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-21 11:16:03,681 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,40783,1689938159262 in 249 msec 2023-07-21 11:16:03,702 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-21 11:16:03,704 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN in 301 msec 2023-07-21 11:16:03,705 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 11:16:03,706 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN in 309 msec 2023-07-21 11:16:03,710 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:03,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938163711"}]},"ts":"1689938163711"} 2023-07-21 11:16:03,714 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:03,715 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938163715"}]},"ts":"1689938163715"} 2023-07-21 11:16:03,720 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 11:16:03,722 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 11:16:03,730 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:03,730 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:03,735 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 1.0230 sec 2023-07-21 11:16:03,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 874 msec 2023-07-21 11:16:03,812 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 11:16:03,816 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:03,816 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:03,861 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:03,892 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48644, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:03,908 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 11:16:03,909 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-21 11:16:03,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 11:16:03,976 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:04,002 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 91 msec 2023-07-21 11:16:04,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:16:04,078 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:04,078 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:04,089 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:04,097 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:16:04,111 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 11:16:04,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 79 msec 2023-07-21 11:16:04,156 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:16:04,178 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 11:16:04,178 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 4.362sec 2023-07-21 11:16:04,181 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 11:16:04,183 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-21 11:16:04,183 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 11:16:04,192 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41077,1689938157103-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 11:16:04,193 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41077,1689938157103-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 11:16:04,217 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x45869290 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:04,258 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 11:16:04,294 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1967f81c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:04,330 DEBUG [hconnection-0x2b8fd83-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:04,388 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34358, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:04,405 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:04,407 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:04,446 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 11:16:04,473 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:49392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 11:16:04,536 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:16:04,536 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:04,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 11:16:04,569 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x63a16738 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:04,632 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@387ffa87, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:04,633 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:04,662 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:04,688 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10187975688000a connected 2023-07-21 11:16:04,768 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=420, OpenFileDescriptor=692, MaxFileDescriptor=60000, SystemLoadAverage=855, ProcessCount=186, AvailableMemoryMB=2722 2023-07-21 11:16:04,771 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-21 11:16:04,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:04,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:04,908 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 11:16:04,928 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:04,929 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:04,929 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:04,929 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:04,930 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:04,930 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:04,930 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:04,950 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:37137 2023-07-21 11:16:04,951 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:04,974 DEBUG 
[Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:04,977 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:05,006 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:05,016 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37137 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:05,032 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:371370x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:05,035 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:371370x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:05,036 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:371370x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 11:16:05,044 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37137-0x10187975688000b connected 2023-07-21 11:16:05,045 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:05,068 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37137 2023-07-21 11:16:05,072 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37137 2023-07-21 11:16:05,096 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37137 2023-07-21 11:16:05,100 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37137 2023-07-21 11:16:05,106 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37137 2023-07-21 11:16:05,109 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:05,109 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:05,109 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:05,110 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) 
to context regionserver 2023-07-21 11:16:05,110 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:05,110 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:05,110 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 11:16:05,111 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 46279 2023-07-21 11:16:05,111 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:05,141 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:05,141 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e4aece7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:05,142 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:05,142 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@16316a5d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:05,295 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:05,297 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:05,297 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:05,297 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:05,301 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:05,303 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6487d5f1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-46279-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2462065874532496517/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:05,313 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@34763ecd{HTTP/1.1, (http/1.1)}{0.0.0.0:46279} 2023-07-21 11:16:05,314 INFO [Listener at localhost.localdomain/33557] server.Server(415): 
Started @14296ms 2023-07-21 11:16:05,325 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:05,326 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:05,328 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:05,328 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:05,330 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:05,336 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ReadOnlyZKClient(139): Connect 0x0a330210 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:05,380 DEBUG [RS:3;jenkins-hbase17:37137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f4ed26, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:05,380 DEBUG [RS:3;jenkins-hbase17:37137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cb6960, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:05,393 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:37137 2023-07-21 11:16:05,393 INFO [RS:3;jenkins-hbase17:37137] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:05,393 INFO [RS:3;jenkins-hbase17:37137] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:05,393 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 11:16:05,394 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:37137, startcode=1689938164928 2023-07-21 11:16:05,395 DEBUG [RS:3;jenkins-hbase17:37137] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:05,417 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51181, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:05,417 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,418 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:05,418 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:05,418 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43969 2023-07-21 11:16:05,420 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:05,426 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:05,426 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:05,433 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:05,434 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:05,441 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:05,441 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,37137,1689938164928] 2023-07-21 11:16:05,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:05,445 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,445 WARN [RS:3;jenkins-hbase17:37137] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:05,445 INFO [RS:3;jenkins-hbase17:37137] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:05,445 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,449 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:16:05,449 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:05,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:05,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:05,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:05,466 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 11:16:05,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,468 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,469 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:05,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:05,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:05,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:05,474 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:05,479 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:05,479 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,480 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:05,482 DEBUG [RS:3;jenkins-hbase17:37137] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:05,482 INFO [RS:3;jenkins-hbase17:37137] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:05,502 INFO [RS:3;jenkins-hbase17:37137] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:05,504 INFO [RS:3;jenkins-hbase17:37137] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:05,504 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:05,508 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:05,522 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
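The memstore and compaction-throughput limits reported above come from region server configuration. A minimal sketch of where those numbers originate, assuming the usual 2.x property names (they are assumptions here, not read from this log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreAndCompactionLimitsSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // globalMemStoreLimit (782.4 M here) = heap size * this fraction;
    // globalMemStoreLimitLowMark (743.3 M) = limit * the lower-limit fraction (~0.95).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Bounds used by PressureAwareCompactionThroughputController: 100 MB/s upper, 50 MB/s lower.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    System.out.println("memstore fraction = " + conf.get("hbase.regionserver.global.memstore.size"));
  }
}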
2023-07-21 11:16:05,522 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,522 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,522 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,523 DEBUG [RS:3;jenkins-hbase17:37137] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:05,525 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:05,525 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:05,525 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:05,563 INFO [RS:3;jenkins-hbase17:37137] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:05,563 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37137,1689938164928-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
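The periodic tasks listed above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, the heap-memory tuner) are ScheduledChore instances run by a ChoreService. A minimal sketch of that mechanism, with an illustrative chore name rather than anything from this test:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  static final class ExampleChore extends ScheduledChore {
    ExampleChore(Stoppable stopper) {
      super("ExampleChore", stopper, 1000);   // name, stopper, period in milliseconds
    }
    @Override
    protected void chore() {
      // periodic work; the region server wires CompactionChecker, MemstoreFlusherChore,
      // and the other chores above through this same API
    }
  }

  public static void main(String[] args) {
    Stoppable neverStop = new Stoppable() {
      @Override public void stop(String why) { }
      @Override public boolean isStopped() { return false; }
    };
    ChoreService service = new ChoreService("example");
    // Scheduling produces the "Chore ScheduledChore name=..., period=... is enabled." lines above.
    service.scheduleChore(new ExampleChore(neverStop));
    service.shutdown();
  }
}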
2023-07-21 11:16:05,644 INFO [RS:3;jenkins-hbase17:37137] regionserver.Replication(203): jenkins-hbase17.apache.org,37137,1689938164928 started 2023-07-21 11:16:05,644 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,37137,1689938164928, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:37137, sessionid=0x10187975688000b 2023-07-21 11:16:05,650 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:05,650 DEBUG [RS:3;jenkins-hbase17:37137] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,650 DEBUG [RS:3;jenkins-hbase17:37137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,37137,1689938164928' 2023-07-21 11:16:05,650 DEBUG [RS:3;jenkins-hbase17:37137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:05,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:05,661 DEBUG [RS:3;jenkins-hbase17:37137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:05,662 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:05,662 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:05,662 DEBUG [RS:3;jenkins-hbase17:37137] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:05,662 DEBUG [RS:3;jenkins-hbase17:37137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,37137,1689938164928' 2023-07-21 11:16:05,662 DEBUG [RS:3;jenkins-hbase17:37137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:05,663 DEBUG [RS:3;jenkins-hbase17:37137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:05,664 DEBUG [RS:3;jenkins-hbase17:37137] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:05,664 INFO [RS:3;jenkins-hbase17:37137] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:05,664 INFO [RS:3;jenkins-hbase17:37137] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
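The rsgroup operations logged around here (AddRSGroup, ListRSGroupInfos, and the MoveServers call that fails just below) are issued from the test through the hbase-rsgroup client. A minimal sketch, assuming the branch-2.4 RSGroupAdminClient API; the host and port come from the log, the surrounding code is illustrative:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("deadServerGroup");          // AddRSGroup request
      rsGroupAdmin.listRSGroups();                         // ListRSGroupInfos request
      // Moving the master's own address into a group is rejected with the
      // ConstraintException ("... is either offline or it does not exist") seen below.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 41077)),
          "master");
    }
  }
}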
2023-07-21 11:16:05,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:05,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:05,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:05,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:05,722 DEBUG [hconnection-0x4543071c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:05,745 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34368, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:05,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:05,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:05,788 INFO [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37137%2C1689938164928, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:05,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:05,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:05,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:49392 deadline: 1689939365805, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:05,811 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:05,862 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:05,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:05,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:05,870 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:34719, jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:05,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:05,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:05,901 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:05,917 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:05,917 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:05,926 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-21 11:16:05,930 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:05,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:05,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup deadServerGroup 2023-07-21 11:16:05,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:05,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:05,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 11:16:05,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:05,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:05,993 INFO [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928/jenkins-hbase17.apache.org%2C37137%2C1689938164928.1689938165791 2023-07-21 11:16:05,997 DEBUG [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]] 2023-07-21 11:16:06,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:06,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:06,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:34719] to rsgroup deadServerGroup 2023-07-21 11:16:06,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:06,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:06,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 11:16:06,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 
11:16:06,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(238): Moving server region 2782e41606006289532e239f665ea4eb, which do not belong to RSGroup deadServerGroup 2023-07-21 11:16:06,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:06,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:06,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:06,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:06,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:06,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:06,088 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:06,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup deadServerGroup 2023-07-21 11:16:06,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:06,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:06,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:06,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:06,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:06,104 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:06,104 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938166104"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938166104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938166104"}]},"ts":"1689938166104"} 2023-07-21 11:16:06,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 
11:16:06,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-21 11:16:06,108 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:06,112 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,34719,1689938159621}] 2023-07-21 11:16:06,113 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34719,1689938159621, state=CLOSING 2023-07-21 11:16:06,118 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:06,118 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:06,118 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34719,1689938159621}] 2023-07-21 11:16:06,124 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:06,282 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 11:16:06,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:06,284 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:06,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:06,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:06,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:06,285 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-21 11:16:06,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/728cc4f1540e47f282a8d3cbd08b0853 2023-07-21 11:16:07,085 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), 
to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/47ab354a4780423db7f93e81451f82da 2023-07-21 11:16:07,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 11:16:07,115 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/728cc4f1540e47f282a8d3cbd08b0853 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853 2023-07-21 11:16:07,150 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853, entries=21, sequenceid=15, filesize=7.1 K 2023-07-21 11:16:07,160 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/47ab354a4780423db7f93e81451f82da as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da 2023-07-21 11:16:07,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da, entries=4, sequenceid=15, filesize=4.8 K 2023-07-21 11:16:07,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2921, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 897ms, sequenceid=15, compaction requested=false 2023-07-21 11:16:07,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:16:07,218 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-21 11:16:07,219 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:07,220 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:07,220 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:07,220 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase17.apache.org,37137,1689938164928 record at close sequenceid=15 2023-07-21 11:16:07,226 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 11:16:07,228 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 
11:16:07,241 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-21 11:16:07,241 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34719,1689938159621 in 1.1100 sec 2023-07-21 11:16:07,244 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:07,328 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:16:07,328 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 11:16:07,329 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:07,329 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 11:16:07,329 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:16:07,329 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 11:16:07,394 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
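The REOPEN/MOVE procedures above relocate hbase:meta (and hbase:rsgroup) off the server being placed into deadServerGroup. A minimal sketch of the equivalent explicit move through the Admin API, assuming the 2.x Admin.move(byte[], ServerName) signature; in this run the rsgroup endpoint drives the move internally:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveMetaRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // hbase:meta always has the fixed encoded region name "1588230740";
      // the destination matches the server chosen by the balancer in the log above.
      admin.move(Bytes.toBytes("1588230740"),
                 ServerName.valueOf("jenkins-hbase17.apache.org", 37137, 1689938164928L));
    }
  }
}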
2023-07-21 11:16:07,396 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37137,1689938164928, state=OPENING 2023-07-21 11:16:07,401 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:07,401 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:07,401 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:07,561 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:07,561 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:07,575 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:07,610 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:16:07,610 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:07,629 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37137%2C1689938164928.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:07,695 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:07,702 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:07,717 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:07,721 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928/jenkins-hbase17.apache.org%2C37137%2C1689938164928.meta.1689938167630.meta 2023-07-21 11:16:07,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]] 2023-07-21 11:16:07,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:07,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:07,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:16:07,729 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 11:16:07,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:16:07,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:07,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:16:07,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:16:07,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:07,741 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:07,741 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:07,742 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:16:07,765 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853 2023-07-21 11:16:07,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:07,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:16:07,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:07,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:07,770 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:07,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:07,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:07,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:07,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:07,775 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:07,791 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da 2023-07-21 11:16:07,791 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:07,793 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:07,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:07,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:16:07,811 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:07,819 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10873911840, jitterRate=0.012711957097053528}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:07,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:07,825 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1689938167561 2023-07-21 11:16:07,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:16:07,843 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:16:07,845 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37137,1689938164928, state=OPEN 2023-07-21 11:16:07,846 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:07,846 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:07,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-21 11:16:07,858 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37137,1689938164928 in 445 msec 2023-07-21 11:16:07,862 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 1.7680 sec 2023-07-21 11:16:08,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:08,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 0 ms 2023-07-21 11:16:08,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2782e41606006289532e239f665ea4eb 1/1 column families, dataSize=1.29 KB heapSize=2.28 KB 2023-07-21 11:16:08,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.29 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/14fcb2495f27487ba67ba2d3cfa299f7 2023-07-21 11:16:08,162 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/14fcb2495f27487ba67ba2d3cfa299f7 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7 2023-07-21 11:16:08,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7, entries=3, sequenceid=9, filesize=5.1 K 2023-07-21 11:16:08,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.29 KB/1317, heapSize ~2.27 KB/2320, currentSize=0 B/0 for 2782e41606006289532e239f665ea4eb in 187ms, sequenceid=9, compaction requested=false 2023-07-21 11:16:08,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:16:08,211 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried 
hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:08,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 11:16:08,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:08,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:08,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 2782e41606006289532e239f665ea4eb move to jenkins-hbase17.apache.org,37137,1689938164928 record at close sequenceid=9 2023-07-21 11:16:08,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,255 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=CLOSED 2023-07-21 11:16:08,255 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938168255"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938168255"}]},"ts":"1689938168255"} 2023-07-21 11:16:08,256 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34719] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 213 connection: 136.243.18.41:34342 deadline: 1689938228256, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=37137 startCode=1689938164928. As of locationSeqNum=15. 
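
The entries above show the master moving the hbase:rsgroup region 2782e41606006289532e239f665ea4eb off the server that is about to be stopped: the close flushes the memstore, commits the flushed file, writes a recovered.edits seqid marker, and a client still holding the old location gets RegionMovedException until its region cache refreshes. A minimal sketch of requesting the same kind of move through the public Admin API, assuming an hbase-site.xml on the classpath that points at the cluster; the encoded region name and destination server are copied from the log purely for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Encoded region name as it appears in the log above.
          byte[] encodedRegionName = Bytes.toBytes("2782e41606006289532e239f665ea4eb");
          // Destination server in host,port,startcode form, as logged by the master.
          ServerName dest = ServerName.valueOf("jenkins-hbase17.apache.org,37137,1689938164928");
          // Asks the master to run the close/open procedures seen in the log.
          admin.move(encodedRegionName, dest);
        }
      }
    }
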
2023-07-21 11:16:08,340 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 11:16:08,363 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:08,365 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:47052, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:08,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 11:16:08,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,34719,1689938159621 in 2.2630 sec 2023-07-21 11:16:08,384 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:08,537 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:16:08,537 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:08,538 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938168537"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938168537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938168537"}]},"ts":"1689938168537"} 2023-07-21 11:16:08,542 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:08,700 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. service=MultiRowMutationService 2023-07-21 11:16:08,701 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
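
The RegionStateStore "Put" entries above show region state being persisted into the info column family of hbase:meta (qualifiers such as regioninfo, sn, state, server). A sketch, using only the standard client API, of reading those same columns back; the exact set of qualifiers written can vary by version, so treat this as illustrative rather than as the test's own code:

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateScanSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan()
                 .addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"))
                 .addColumn(Bytes.toBytes("info"), Bytes.toBytes("server")))) {
          // Each row of hbase:meta is one region; print its persisted state and location.
          for (Result row : scanner) {
            for (Cell c : row.rawCells()) {
              System.out.println(Bytes.toString(CellUtil.cloneRow(c)) + " "
                  + Bytes.toString(CellUtil.cloneQualifier(c)) + "="
                  + Bytes.toString(CellUtil.cloneValue(c)));
            }
          }
        }
      }
    }
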
2023-07-21 11:16:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,703 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,705 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:08,705 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:08,706 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2782e41606006289532e239f665ea4eb columnFamilyName m 2023-07-21 11:16:08,717 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7 2023-07-21 11:16:08,717 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(310): Store=2782e41606006289532e239f665ea4eb/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:08,718 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,721 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,726 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:08,727 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2782e41606006289532e239f665ea4eb; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@923c8f3, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:08,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:08,728 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., pid=17, masterSystemTime=1689938168696 2023-07-21 11:16:08,730 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,731 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:08,731 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:08,732 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938168731"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938168731"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938168731"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938168731"}]},"ts":"1689938168731"} 2023-07-21 11:16:08,738 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-21 11:16:08,738 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,37137,1689938164928 in 192 msec 2023-07-21 11:16:08,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE in 2.6600 sec 2023-07-21 11:16:09,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,34719,1689938159621] are moved back to default 2023-07-21 11:16:09,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-21 11:16:09,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 
11:16:09,121 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34719] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:34368 deadline: 1689938229120, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=37137 startCode=1689938164928. As of locationSeqNum=9. 2023-07-21 11:16:09,230 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34719] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:34368 deadline: 1689938229230, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=37137 startCode=1689938164928. As of locationSeqNum=15. 2023-07-21 11:16:09,334 DEBUG [hconnection-0x4543071c-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:09,349 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:47058, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:09,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:09,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:09,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-21 11:16:09,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:09,408 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:09,446 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:41058, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:09,448 INFO [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34719] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34719,1689938159621' ***** 2023-07-21 11:16:09,448 INFO [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34719] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x2b8fd83 2023-07-21 11:16:09,449 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:09,451 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:09,455 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:09,478 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:09,502 INFO [RS:2;jenkins-hbase17:34719] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@57bde63a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:09,509 INFO [RS:2;jenkins-hbase17:34719] server.AbstractConnector(383): Stopped ServerConnector@50ceb1f8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:09,509 INFO [RS:2;jenkins-hbase17:34719] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:09,511 INFO [RS:2;jenkins-hbase17:34719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6aac43d9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:09,512 INFO [RS:2;jenkins-hbase17:34719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@392cca42{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:09,534 INFO [RS:2;jenkins-hbase17:34719] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:09,534 INFO [RS:2;jenkins-hbase17:34719] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:09,535 INFO [RS:2;jenkins-hbase17:34719] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:09,535 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:09,535 DEBUG [RS:2;jenkins-hbase17:34719] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3cab6281 to 127.0.0.1:61077 2023-07-21 11:16:09,535 DEBUG [RS:2;jenkins-hbase17:34719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:09,535 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34719,1689938159621; all regions closed. 2023-07-21 11:16:09,722 DEBUG [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:09,722 INFO [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34719%2C1689938159621.meta:.meta(num 1689938162467) 2023-07-21 11:16:09,756 DEBUG [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:09,756 INFO [RS:2;jenkins-hbase17:34719] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34719%2C1689938159621:(num 1689938162255) 2023-07-21 11:16:09,756 DEBUG [RS:2;jenkins-hbase17:34719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:09,757 INFO [RS:2;jenkins-hbase17:34719] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:09,757 INFO [RS:2;jenkins-hbase17:34719] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:09,757 INFO [RS:2;jenkins-hbase17:34719] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
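
The shutdown above was triggered by an admin client ("STOPPING region server ... Called by admin client"), after which the region server closes its info server, WALs, and leases. A sketch of issuing that stop through the public Admin API, assuming a reachable cluster configuration; the host:port is copied from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class StopRegionServerSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // "hostname:port" of the region server to stop, as in the log above.
          admin.stopRegionServer("jenkins-hbase17.apache.org:34719");
        }
      }
    }
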
2023-07-21 11:16:09,757 INFO [RS:2;jenkins-hbase17:34719] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:09,757 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:09,757 INFO [RS:2;jenkins-hbase17:34719] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:09,770 INFO [RS:2;jenkins-hbase17:34719] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34719 2023-07-21 11:16:09,797 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:09,797 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:09,797 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:09,797 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:09,800 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:09,800 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:09,800 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:09,803 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 2023-07-21 11:16:09,803 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:09,807 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:09,807 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:09,812 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,34719,1689938159621] 2023-07-21 11:16:09,813 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,34719,1689938159621; numProcessing=1 2023-07-21 11:16:09,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-21 11:16:09,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:09,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:09,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:09,908 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:09,908 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34719-0x101879756880003, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:09,915 INFO [RS:2;jenkins-hbase17:34719] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34719,1689938159621; zookeeper connection closed. 
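
The GetRSGroupInfo and ListRSGroupInfos requests logged here are rsgroup client calls arriving at the master. A sketch of the equivalent calls, assuming the branch-2.4 hbase-rsgroup client class RSGroupAdminClient (and its Connection-taking constructor) is available on the classpath; the group name is copied from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupInfoSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          // Assumed constructor signature from the hbase-rsgroup module on this branch.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Rough equivalent of the GetRSGroupInfo call in the log.
          RSGroupInfo dead = rsGroupAdmin.getRSGroupInfo("deadServerGroup");
          System.out.println(dead.getName() + " -> " + dead.getServers());
          // Rough equivalent of ListRSGroupInfos.
          for (RSGroupInfo g : rsGroupAdmin.listRSGroups()) {
            System.out.println(g.getName());
          }
        }
      }
    }
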
2023-07-21 11:16:09,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:09,916 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@53eb1b9d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@53eb1b9d 2023-07-21 11:16:09,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:09,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:09,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:09,917 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 znode expired, triggering replicatorRemoved event 2023-07-21 11:16:09,917 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:09,917 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 znode expired, triggering replicatorRemoved event 2023-07-21 11:16:09,920 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,34719,1689938159621 already deleted, retry=false 2023-07-21 11:16:09,920 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,34719,1689938159621 on jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:09,923 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:09,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:09,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
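
Each live region server keeps an ephemeral znode under /hbase/rs; the NodeDeleted and NodeChildrenChanged events above are how the master's RegionServerTracker and the other servers learn that jenkins-hbase17.apache.org,34719 is gone. A sketch of watching the same znode with a plain ZooKeeper client; the quorum address, base znode, and session timeout are copied from the log:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the quorum used by this test run (a live quorum is assumed here).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:61077", 90000, event -> { });
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event " + event.getType() + " on " + event.getPath());
        // One ephemeral child per live region server; NodeChildrenChanged fires when one exits.
        List<String> liveServers = zk.getChildren("/hbase/rs", watcher);
        System.out.println("live region servers: " + liveServers);
        zk.close();
      }
    }
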
2023-07-21 11:16:09,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:09,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:09,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:09,941 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:09,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:09,944 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:09,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:09,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:09,949 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,34719,1689938159621, splitWal=true, meta=false 2023-07-21 11:16:09,949 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=18 for jenkins-hbase17.apache.org,34719,1689938159621 (carryingMeta=false) jenkins-hbase17.apache.org,34719,1689938159621/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4bff41b5[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 11:16:09,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:09,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:09,950 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:09,950 WARN [RS-EventLoopGroup-5-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:34719 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:34719
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:09,954 DEBUG [RS-EventLoopGroup-5-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:34719 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:34719
2023-07-21 11:16:09,953 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,34719,1689938159621 znode expired, triggering replicatorRemoved event
2023-07-21 11:16:09,950 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262
2023-07-21 11:16:09,957 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444
2023-07-21 11:16:09,958 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928
2023-07-21 11:16:09,958 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262
2023-07-21 11:16:09,972 INFO [PEWorker-5] procedure.ServerCrashProcedure(161): Start pid=18, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,34719,1689938159621, splitWal=true, meta=false
2023-07-21 11:16:09,981 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,34719,1689938159621 had 0 regions
2023-07-21 11:16:09,987 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=18, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, 
locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,34719,1689938159621, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:09,990 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621-splitting 2023-07-21 11:16:09,992 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621-splitting dir is empty, no logs to split. 2023-07-21 11:16:09,992 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,34719,1689938159621 WAL count=0, meta=false 2023-07-21 11:16:10,005 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621-splitting dir is empty, no logs to split. 2023-07-21 11:16:10,005 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,34719,1689938159621 WAL count=0, meta=false 2023-07-21 11:16:10,005 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,34719,1689938159621 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:10,019 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,34719,1689938159621 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34719,1689938159621-splitting does not exist. 2023-07-21 11:16:10,032 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,34719,1689938159621 after splitting done 2023-07-21 11:16:10,033 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase17.apache.org,34719,1689938159621 from processing; numProcessing=0 2023-07-21 11:16:10,043 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,34719,1689938159621, splitWal=true, meta=false in 108 msec 2023-07-21 11:16:10,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 11:16:10,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:16:10,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:10,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:10,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
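
ServerCrashProcedure pid=18 renames the crashed server's WAL directory with a "-splitting" suffix, finds it empty, and finishes without splitting anything. A sketch of inspecting such a directory with the Hadoop FileSystem API; the path is copied from the log and exists only inside this test run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WalSplittingDirCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // WAL directory of the crashed server, after the "-splitting" rename seen in the log.
        Path splitting = new Path("hdfs://localhost.localdomain:36511/user/jenkins/test-data/"
            + "4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/"
            + "jenkins-hbase17.apache.org,34719,1689938159621-splitting");
        FileSystem fs = splitting.getFileSystem(conf);
        if (!fs.exists(splitting)) {
          // Matches the "does not exist ... ignore" branch logged by the procedure.
          System.out.println("already cleaned up");
        } else {
          FileStatus[] wals = fs.listStatus(splitting);
          System.out.println(wals.length + " WAL file(s) left to split");
        }
      }
    }
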
2023-07-21 11:16:10,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:10,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:34719] to rsgroup default 2023-07-21 11:16:10,141 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,141 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 11:16:10,142 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:10,144 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:16:10,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase17.apache.org:34719 during move-to-default rsgroup because not online 2023-07-21 11:16:10,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 11:16:10,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:10,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-21 11:16:10,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-21 11:16:10,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-21 11:16:10,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:10,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup deadServerGroup 2023-07-21 11:16:10,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:10,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:10,221 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 11:16:10,241 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:10,242 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:10,242 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:10,242 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:10,243 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:10,243 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:10,243 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:10,252 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:40467 2023-07-21 11:16:10,253 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:10,280 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:10,292 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:10,293 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:10,295 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40467 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:10,311 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:404670x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:10,313 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:404670x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:10,316 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:404670x0, quorum=127.0.0.1:61077, baseZNode=/hbase 
Set watcher on existing znode=/hbase/running 2023-07-21 11:16:10,342 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40467-0x10187975688000d connected 2023-07-21 11:16:10,352 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:10,364 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40467 2023-07-21 11:16:10,365 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40467 2023-07-21 11:16:10,369 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40467 2023-07-21 11:16:10,376 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40467 2023-07-21 11:16:10,385 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40467 2023-07-21 11:16:10,388 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:10,389 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:10,389 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:10,390 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:10,390 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:10,390 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:10,391 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:10,392 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 33271 2023-07-21 11:16:10,392 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:10,454 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:10,454 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@66c4a3b5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:10,455 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:10,455 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1bc5d98b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:10,623 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:10,624 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:10,625 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:10,625 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:10,626 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:10,627 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@dd3dd9f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-33271-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3038900927842590163/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:10,632 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@303421fd{HTTP/1.1, (http/1.1)}{0.0.0.0:33271} 2023-07-21 11:16:10,633 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @19616ms 2023-07-21 11:16:10,661 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:10,667 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:10,676 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:10,676 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:10,678 DEBUG [RS:4;jenkins-hbase17:40467] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:10,681 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ReadOnlyZKClient(139): Connect 0x46f5c2a2 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:10,713 DEBUG [RS:4;jenkins-hbase17:40467] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4832340c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:10,713 DEBUG [RS:4;jenkins-hbase17:40467] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c0da9d2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:10,726 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase17:40467 2023-07-21 11:16:10,726 INFO [RS:4;jenkins-hbase17:40467] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:10,726 INFO [RS:4;jenkins-hbase17:40467] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:10,726 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:10,732 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,41077,1689938157103 with isa=jenkins-hbase17.apache.org/136.243.18.41:40467, startcode=1689938170241 2023-07-21 11:16:10,733 DEBUG [RS:4;jenkins-hbase17:40467] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:10,752 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57399, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:10,753 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41077] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,753 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:16:10,754 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:10,754 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:10,755 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43969 2023-07-21 11:16:10,757 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:10,760 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:10,760 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:10,761 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ZKUtil(162): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,761 WARN [RS:4;jenkins-hbase17:40467] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:10,761 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:10,761 INFO [RS:4;jenkins-hbase17:40467] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:10,761 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,762 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:10,764 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:10,765 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:10,765 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,40467,1689938170241] 2023-07-21 11:16:10,765 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:10,768 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:16:10,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:10,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:10,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:10,783 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,41077,1689938157103] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with 
servers: 4 2023-07-21 11:16:10,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:10,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:10,785 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ZKUtil(162): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:10,785 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ZKUtil(162): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:10,786 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ZKUtil(162): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,786 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ZKUtil(162): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:10,787 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:10,788 INFO [RS:4;jenkins-hbase17:40467] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:10,790 INFO [RS:4;jenkins-hbase17:40467] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:10,793 INFO [RS:4;jenkins-hbase17:40467] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:10,793 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:10,811 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:10,813 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,814 DEBUG [RS:4;jenkins-hbase17:40467] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:10,820 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:10,820 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:10,823 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:10,842 INFO [RS:4;jenkins-hbase17:40467] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:10,842 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40467,1689938170241-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:10,862 INFO [RS:4;jenkins-hbase17:40467] regionserver.Replication(203): jenkins-hbase17.apache.org,40467,1689938170241 started 2023-07-21 11:16:10,863 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,40467,1689938170241, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:40467, sessionid=0x10187975688000d 2023-07-21 11:16:10,863 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:10,863 DEBUG [RS:4;jenkins-hbase17:40467] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,863 DEBUG [RS:4;jenkins-hbase17:40467] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40467,1689938170241' 2023-07-21 11:16:10,863 DEBUG [RS:4;jenkins-hbase17:40467] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:10,864 DEBUG [RS:4;jenkins-hbase17:40467] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:10,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:10,868 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:10,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,869 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:10,869 DEBUG [RS:4;jenkins-hbase17:40467] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:10,869 DEBUG [RS:4;jenkins-hbase17:40467] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40467,1689938170241' 2023-07-21 11:16:10,869 DEBUG [RS:4;jenkins-hbase17:40467] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:10,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:10,870 DEBUG [RS:4;jenkins-hbase17:40467] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:10,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:10,870 DEBUG [RS:4;jenkins-hbase17:40467] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:10,870 INFO [RS:4;jenkins-hbase17:40467] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:10,870 INFO [RS:4;jenkins-hbase17:40467] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:16:10,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:10,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:10,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:10,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:10,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:10,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:49392 deadline: 1689939370884, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:10,887 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:10,889 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:10,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:10,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:10,891 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:10,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:10,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:10,935 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=471 (was 420) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x46f5c2a2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:51080 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1127106390_17 at /127.0.0.1:39802 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1127106390_17 at /127.0.0.1:48068 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp158509738-768 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4b141945-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:48100 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:51106 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1102726472-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1102726472-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp158509738-769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x46f5c2a2-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp158509738-762 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-374bba88-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:37137 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:3;jenkins-hbase17:37137-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-11343ed5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1102726472-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1127106390_17 at /127.0.0.1:51944 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase17:40467-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp158509738-765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,37137,1689938164928 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x0a330210-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x4b141945-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1102726472-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1102726472-636-acceptor-0@4445fb9a-ServerConnector@34763ecd{HTTP/1.1, (http/1.1)}{0.0.0.0:46279} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp158509738-763-acceptor-0@64628070-ServerConnector@303421fd{HTTP/1.1, (http/1.1)}{0.0.0.0:33271} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1127106390_17 at /127.0.0.1:48084 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:40467Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:48024 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1127106390_17 at /127.0.0.1:54730 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x0a330210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x0a330210-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp158509738-766 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,37137,1689938164928.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-metaLookup-shared--pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:37137Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:51074 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1102726472-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:54766 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:4;jenkins-hbase17:40467 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1102726472-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp158509738-764 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp158509738-767 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:54734 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1102726472-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x46f5c2a2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - 
Thread LEAK? -, OpenFileDescriptor=746 (was 692) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=810 (was 855), ProcessCount=186 (was 186), AvailableMemoryMB=2156 (was 2722) 2023-07-21 11:16:10,959 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=471, OpenFileDescriptor=746, MaxFileDescriptor=60000, SystemLoadAverage=810, ProcessCount=186, AvailableMemoryMB=2155 2023-07-21 11:16:10,959 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign 2023-07-21 11:16:10,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:10,980 INFO [RS:4;jenkins-hbase17:40467] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40467%2C1689938170241, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:10,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:10,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:10,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
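The "Potentially hanging thread" report and the before/after counts above (Thread=471, OpenFileDescriptor=746, SystemLoadAverage, AvailableMemoryMB) come from HBase's per-test resource accounting: it snapshots live threads and descriptors around each test method and prints the stack of anything that survived. The sketch below reproduces only the thread-diff part of that idea with plain JDK calls; it is not HBase's ResourceChecker, and the class name, method names, and the "example-leaked-thread" name are illustrative placeholders.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Minimal sketch: diff live threads before and after a block of work and
// print a "Potentially hanging thread" style report for anything new that
// is still alive. Hypothetical names; not the HBase ResourceChecker.
public final class ThreadLeakSnapshot {

    /** Capture the currently live threads and their stacks. */
    private static Map<Thread, StackTraceElement[]> snapshot() {
        return new HashMap<>(Thread.getAllStackTraces());
    }

    /** Print threads present after the work that were not present before it. */
    private static void reportNewThreads(Set<Thread> before,
                                         Map<Thread, StackTraceElement[]> after) {
        for (Map.Entry<Thread, StackTraceElement[]> e : after.entrySet()) {
            Thread t = e.getKey();
            if (!before.contains(t) && t.isAlive()) {
                System.out.println("Potentially hanging thread: " + t.getName());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Set<Thread> before = snapshot().keySet();

        // Stand-in for a test body that leaves a thread behind.
        Thread leaked = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        }, "example-leaked-thread");
        leaked.setDaemon(true);
        leaked.start();

        reportNewThreads(before, snapshot());
        System.out.println("Thread count before=" + before.size()
            + " after=" + Thread.getAllStackTraces().size());
    }
}

The real checker in the log does the equivalent bookkeeping per test method and additionally records open file descriptors, system load, process count, and free memory, which is where the "(was ...)" deltas in the summary line above come from.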
2023-07-21 11:16:10,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:10,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:10,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:10,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:10,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:10,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:10,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:11,005 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:11,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:11,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:11,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:11,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:11,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:11,029 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:11,033 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:11,033 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:11,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 
list rsgroup 2023-07-21 11:16:11,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:11,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:11,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:11,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:49392 deadline: 1689939371047, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:11,048 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:11,054 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:11,055 INFO [RS:4;jenkins-hbase17:40467] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241/jenkins-hbase17.apache.org%2C40467%2C1689938170241.1689938170980 2023-07-21 11:16:11,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:11,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:11,056 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:11,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:11,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:11,058 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-21 11:16:11,060 DEBUG [RS:4;jenkins-hbase17:40467] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]] 2023-07-21 11:16:11,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$16(3053): Client=jenkins//136.243.18.41 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-21 11:16:11,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-21 11:16:11,102 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:16:11,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 11:16:11,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 37 msec 2023-07-21 11:16:11,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS 
=> '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:11,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:11,127 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:11,135 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:11,136 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:11,137 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:11,145 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:11,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 20 2023-07-21 11:16:11,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:16:11,156 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,157 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e empty. 
2023-07-21 11:16:11,158 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,158 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-21 11:16:11,222 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:11,223 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6a009d9d76b1b293dacf510f67bf124e, NAME => 'Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:11,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:16:11,267 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:11,268 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 6a009d9d76b1b293dacf510f67bf124e, disabling compactions & flushes 2023-07-21 11:16:11,268 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:11,268 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:11,268 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. after waiting 0 ms 2023-07-21 11:16:11,268 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:11,268 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 
2023-07-21 11:16:11,268 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 6a009d9d76b1b293dacf510f67bf124e: 2023-07-21 11:16:11,280 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:11,282 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938171282"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938171282"}]},"ts":"1689938171282"} 2023-07-21 11:16:11,287 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:11,288 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:11,289 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938171289"}]},"ts":"1689938171289"} 2023-07-21 11:16:11,291 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-21 11:16:11,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:11,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:11,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:11,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:11,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:16:11,311 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:11,312 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, ASSIGN}] 2023-07-21 11:16:11,314 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, ASSIGN 2023-07-21 11:16:11,317 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:11,469 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:16:11,471 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6a009d9d76b1b293dacf510f67bf124e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:11,472 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938171471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938171471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938171471"}]},"ts":"1689938171471"} 2023-07-21 11:16:11,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:16:11,485 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; OpenRegionProcedure 6a009d9d76b1b293dacf510f67bf124e, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:11,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:11,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6a009d9d76b1b293dacf510f67bf124e, NAME => 'Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:11,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:11,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,679 INFO [StoreOpener-6a009d9d76b1b293dacf510f67bf124e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,692 DEBUG [StoreOpener-6a009d9d76b1b293dacf510f67bf124e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/f 2023-07-21 11:16:11,692 DEBUG [StoreOpener-6a009d9d76b1b293dacf510f67bf124e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/f 2023-07-21 11:16:11,694 INFO [StoreOpener-6a009d9d76b1b293dacf510f67bf124e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6a009d9d76b1b293dacf510f67bf124e columnFamilyName f 2023-07-21 11:16:11,695 INFO [StoreOpener-6a009d9d76b1b293dacf510f67bf124e-1] regionserver.HStore(310): Store=6a009d9d76b1b293dacf510f67bf124e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:11,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:11,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:11,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6a009d9d76b1b293dacf510f67bf124e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11114016800, jitterRate=0.035073474049568176}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:11,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6a009d9d76b1b293dacf510f67bf124e: 2023-07-21 11:16:11,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e., pid=22, masterSystemTime=1689938171644 2023-07-21 11:16:11,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:11,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 
2023-07-21 11:16:11,730 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6a009d9d76b1b293dacf510f67bf124e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:11,730 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938171730"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938171730"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938171730"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938171730"}]},"ts":"1689938171730"} 2023-07-21 11:16:11,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 11:16:11,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; OpenRegionProcedure 6a009d9d76b1b293dacf510f67bf124e, server=jenkins-hbase17.apache.org,37137,1689938164928 in 251 msec 2023-07-21 11:16:11,744 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 11:16:11,746 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, ASSIGN in 429 msec 2023-07-21 11:16:11,747 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:11,747 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938171747"}]},"ts":"1689938171747"} 2023-07-21 11:16:11,750 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-21 11:16:11,753 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:11,756 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 631 msec 2023-07-21 11:16:11,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 11:16:11,779 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 20 completed 2023-07-21 11:16:11,779 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:11,784 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:11,786 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:47074, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:11,791 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=AdminService, sasl=false 2023-07-21 11:16:11,821 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33156, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:11,824 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:11,837 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:11,838 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:11,843 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:11,854 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-21 11:16:11,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCreateAndAssign 2023-07-21 11:16:11,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:11,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 11:16:11,883 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938171882"}]},"ts":"1689938171882"} 2023-07-21 11:16:11,888 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-21 11:16:11,892 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-21 11:16:11,894 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, UNASSIGN}] 2023-07-21 11:16:11,896 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, UNASSIGN 2023-07-21 11:16:11,898 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=6a009d9d76b1b293dacf510f67bf124e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:11,898 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938171898"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938171898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938171898"}]},"ts":"1689938171898"} 2023-07-21 11:16:11,900 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=24, 
state=RUNNABLE; CloseRegionProcedure 6a009d9d76b1b293dacf510f67bf124e, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:11,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 11:16:12,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:12,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6a009d9d76b1b293dacf510f67bf124e, disabling compactions & flushes 2023-07-21 11:16:12,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:12,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:12,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. after waiting 0 ms 2023-07-21 11:16:12,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 2023-07-21 11:16:12,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:12,087 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e. 
2023-07-21 11:16:12,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6a009d9d76b1b293dacf510f67bf124e: 2023-07-21 11:16:12,092 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:12,093 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=6a009d9d76b1b293dacf510f67bf124e, regionState=CLOSED 2023-07-21 11:16:12,093 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689938172093"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938172093"}]},"ts":"1689938172093"} 2023-07-21 11:16:12,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=24 2023-07-21 11:16:12,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=24, state=SUCCESS; CloseRegionProcedure 6a009d9d76b1b293dacf510f67bf124e, server=jenkins-hbase17.apache.org,37137,1689938164928 in 197 msec 2023-07-21 11:16:12,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-21 11:16:12,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=6a009d9d76b1b293dacf510f67bf124e, UNASSIGN in 207 msec 2023-07-21 11:16:12,108 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938172107"}]},"ts":"1689938172107"} 2023-07-21 11:16:12,110 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-21 11:16:12,111 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-21 11:16:12,119 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 249 msec 2023-07-21 11:16:12,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 11:16:12,201 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 23 completed 2023-07-21 11:16:12,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCreateAndAssign 2023-07-21 11:16:12,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:12,237 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=26, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:12,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-21 11:16:12,241 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=26, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:12,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:12,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:12,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:12,253 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:12,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-21 11:16:12,259 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/recovered.edits] 2023-07-21 11:16:12,266 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e/recovered.edits/4.seqid 2023-07-21 11:16:12,268 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndAssign/6a009d9d76b1b293dacf510f67bf124e 2023-07-21 11:16:12,268 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-21 11:16:12,271 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=26, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:12,298 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-21 11:16:12,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-21 11:16:12,414 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 2023-07-21 11:16:12,419 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=26, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:12,419 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 
2023-07-21 11:16:12,419 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938172419"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:12,433 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:16:12,434 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6a009d9d76b1b293dacf510f67bf124e, NAME => 'Group_testCreateAndAssign,,1689938171119.6a009d9d76b1b293dacf510f67bf124e.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:16:12,434 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 2023-07-21 11:16:12,434 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938172434"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:12,439 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-21 11:16:12,443 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=26, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:12,446 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 225 msec 2023-07-21 11:16:12,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-21 11:16:12,562 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 26 completed 2023-07-21 11:16:12,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:12,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:12,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:12,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:16:12,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:12,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:12,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:12,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:12,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:12,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:12,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:12,643 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:12,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:12,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:12,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:12,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:12,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:12,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:12,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:12,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:12,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:12,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 161 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939372720, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:12,723 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:12,726 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:12,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:12,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:12,732 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:12,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:12,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:12,775 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=487 (was 471) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,40467,1689938170241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1392554571_17 at /127.0.0.1:60998 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-1138614856-136.243.18.41-1689938153171:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:51944 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1392554571_17 at /127.0.0.1:51954 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1392554571_17 at /127.0.0.1:39814 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=759 (was 746) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=810 (was 810), ProcessCount=186 (was 186), AvailableMemoryMB=2021 (was 2155) 2023-07-21 11:16:12,799 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=487, OpenFileDescriptor=759, MaxFileDescriptor=60000, SystemLoadAverage=810, ProcessCount=186, AvailableMemoryMB=2019 2023-07-21 11:16:12,799 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-21 11:16:12,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:12,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:12,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:12,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:12,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:12,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:12,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:12,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:12,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:12,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:12,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:12,851 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:12,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:12,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:12,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:12,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:12,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:12,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:12,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:12,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:12,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:12,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 189 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939372878, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:12,879 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:12,881 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:12,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:12,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:12,889 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:12,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:12,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:12,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:12,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:12,900 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:12,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 27 2023-07-21 11:16:12,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-21 11:16:12,904 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:12,905 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:12,905 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:12,908 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:12,927 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:12,927 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:12,927 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:12,928 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:12,928 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 empty. 2023-07-21 11:16:12,929 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e empty. 2023-07-21 11:16:12,929 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:12,929 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:12,930 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:12,930 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 empty. 2023-07-21 11:16:12,930 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 2023-07-21 11:16:12,931 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 empty. 
2023-07-21 11:16:12,932 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:12,932 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c empty. 2023-07-21 11:16:12,932 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:12,933 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:12,933 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:12,933 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:12,934 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:12,937 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 empty. 2023-07-21 11:16:12,938 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 empty. 2023-07-21 11:16:12,938 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:12,940 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:12,948 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 empty. 
2023-07-21 11:16:12,952 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 2023-07-21 11:16:12,953 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:12,957 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c empty. 2023-07-21 11:16:12,957 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df empty. 2023-07-21 11:16:12,957 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:12,957 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:12,957 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-21 11:16:12,999 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:13,001 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 310a0e12e8c78eed458f01b87724c89e, NAME => 'Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,001 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1ba6fb1c8b9ca3f6d638c6d25372eab9, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,001 INFO 
[RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 8160c5907f44514700ae33cb307e3f40, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-21 11:16:13,065 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,069 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 1ba6fb1c8b9ca3f6d638c6d25372eab9, disabling compactions & flushes 2023-07-21 11:16:13,089 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:13,090 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:13,090 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. after waiting 0 ms 2023-07-21 11:16:13,090 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:13,090 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 
2023-07-21 11:16:13,090 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 1ba6fb1c8b9ca3f6d638c6d25372eab9: 2023-07-21 11:16:13,090 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => cdf1f347d5b7f7314366b50840c18537, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,125 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,125 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 310a0e12e8c78eed458f01b87724c89e, disabling compactions & flushes 2023-07-21 11:16:13,125 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:13,125 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:13,125 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. after waiting 0 ms 2023-07-21 11:16:13,125 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:13,125 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 
2023-07-21 11:16:13,125 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 310a0e12e8c78eed458f01b87724c89e: 2023-07-21 11:16:13,126 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => da028d0dd3b64c4dfc6569fd0d999e6c, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,134 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,135 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 8160c5907f44514700ae33cb307e3f40, disabling compactions & flushes 2023-07-21 11:16:13,135 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:13,136 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:13,136 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. after waiting 0 ms 2023-07-21 11:16:13,136 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:13,136 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 
2023-07-21 11:16:13,136 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 8160c5907f44514700ae33cb307e3f40: 2023-07-21 11:16:13,136 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 5ece16465b7ba9526d3620e0482ced3c, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-21 11:16:13,241 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,245 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing cdf1f347d5b7f7314366b50840c18537, disabling compactions & flushes 2023-07-21 11:16:13,245 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:13,245 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:13,245 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. after waiting 0 ms 2023-07-21 11:16:13,245 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:13,245 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 
2023-07-21 11:16:13,245 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for cdf1f347d5b7f7314366b50840c18537: 2023-07-21 11:16:13,246 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:13,246 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => afc016f3656b887f1d07954a61494300, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,266 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,266 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing da028d0dd3b64c4dfc6569fd0d999e6c, disabling compactions & flushes 2023-07-21 11:16:13,266 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:13,266 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:13,266 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. after waiting 0 ms 2023-07-21 11:16:13,266 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:13,267 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 
2023-07-21 11:16:13,267 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for da028d0dd3b64c4dfc6569fd0d999e6c: 2023-07-21 11:16:13,267 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => da0dcd8a3e03226381a32dee47d688df, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,270 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,284 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 5ece16465b7ba9526d3620e0482ced3c, disabling compactions & flushes 2023-07-21 11:16:13,284 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:13,284 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:13,284 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. after waiting 0 ms 2023-07-21 11:16:13,284 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:13,284 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 
2023-07-21 11:16:13,284 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 5ece16465b7ba9526d3620e0482ced3c: 2023-07-21 11:16:13,285 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9f0870d9333c22090af6906d223e01e9, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,436 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,456 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing afc016f3656b887f1d07954a61494300, disabling compactions & flushes 2023-07-21 11:16:13,456 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:13,456 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:13,456 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. after waiting 0 ms 2023-07-21 11:16:13,457 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:13,457 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 
2023-07-21 11:16:13,457 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for afc016f3656b887f1d07954a61494300: 2023-07-21 11:16:13,458 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2265ca53ec03c749164409bc942b21d8, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:13,514 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,514 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 2265ca53ec03c749164409bc942b21d8, disabling compactions & flushes 2023-07-21 11:16:13,514 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:13,514 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:13,514 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. after waiting 0 ms 2023-07-21 11:16:13,514 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:13,514 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 
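The "creating {ENCODED => ..., STARTKEY => ..., ENDKEY => ...}" entries above show Group_testCreateMultiRegion being pre-split into ten regions at explicit binary key boundaries before any data is written. A minimal sketch of how such a pre-split table could be requested through the HBase 2.x Admin API follows; this is not the test's source code, and the class name and the (truncated) split-key list are illustrative assumptions mirroring the STARTKEY values visible in the log.

// Illustrative sketch only, assuming the standard HBase 2.x client API.
// The split keys below reproduce the first few region boundaries seen above.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("Group_testCreateMultiRegion");
      // Single column family 'f', matching the table descriptor printed in the log.
      TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
      // Each split key closes one region and starts the next, e.g.
      // (-inf, \x00\x02\x04\x06\x08), [\x00\x02\x04\x06\x08, \x00"$&(), ...
      byte[][] splitKeys = new byte[][] {
          new byte[] {0x00, 0x02, 0x04, 0x06, 0x08},
          new byte[] {0x00, 0x22, 0x24, 0x26, 0x28},   // \x00"$&(
          new byte[] {0x00, 0x42, 0x44, 0x46, 0x48},   // \x00BDFH
      };
      admin.createTable(table.build(), splitKeys);     // one CreateTableProcedure on the master
    }
  }
}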
2023-07-21 11:16:13,514 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 2265ca53ec03c749164409bc942b21d8: 2023-07-21 11:16:13,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-21 11:16:13,837 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,837 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 9f0870d9333c22090af6906d223e01e9, disabling compactions & flushes 2023-07-21 11:16:13,837 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:13,837 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:13,837 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. after waiting 0 ms 2023-07-21 11:16:13,837 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:13,838 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:13,838 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 9f0870d9333c22090af6906d223e01e9: 2023-07-21 11:16:13,838 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:13,839 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing da0dcd8a3e03226381a32dee47d688df, disabling compactions & flushes 2023-07-21 11:16:13,839 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:13,840 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:13,840 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 
after waiting 0 ms 2023-07-21 11:16:13,840 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:13,840 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:13,840 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for da0dcd8a3e03226381a32dee47d688df: 2023-07-21 11:16:13,857 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:13,864 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689938172895.8160c5907f44514700ae33cb307e3f40.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,866 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,866 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689938172895.afc016f3656b887f1d07954a61494300.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,866 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,866 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,867 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938173864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938173864"}]},"ts":"1689938173864"} 2023-07-21 11:16:13,876 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-21 11:16:13,882 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:13,882 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938173882"}]},"ts":"1689938173882"} 2023-07-21 11:16:13,886 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-21 11:16:13,890 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:13,891 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:13,891 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:13,891 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:13,891 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:16:13,891 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:13,892 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, ASSIGN}, {pid=29, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1ba6fb1c8b9ca3f6d638c6d25372eab9, ASSIGN}, {pid=30, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, ASSIGN}, {pid=31, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, ASSIGN}, {pid=32, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, ASSIGN}, {pid=33, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, ASSIGN}, {pid=34, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, ASSIGN}, {pid=35, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da0dcd8a3e03226381a32dee47d688df, ASSIGN}, {pid=36, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, ASSIGN}, {pid=37, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, ASSIGN}] 2023-07-21 11:16:13,911 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=37, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, ASSIGN 2023-07-21 11:16:13,912 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da0dcd8a3e03226381a32dee47d688df, ASSIGN 2023-07-21 11:16:13,913 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, ASSIGN 2023-07-21 11:16:13,916 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, ASSIGN 2023-07-21 11:16:13,919 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, ASSIGN 2023-07-21 11:16:13,920 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=37, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40467,1689938170241; forceNewPlan=false, retain=false 2023-07-21 11:16:13,925 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=35, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, 
region=da0dcd8a3e03226381a32dee47d688df, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:13,926 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=36, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40783,1689938159262; forceNewPlan=false, retain=false 2023-07-21 11:16:13,928 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=34, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39805,1689938159444; forceNewPlan=false, retain=false 2023-07-21 11:16:13,929 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=33, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:13,929 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, ASSIGN 2023-07-21 11:16:13,930 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, ASSIGN 2023-07-21 11:16:13,932 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, ASSIGN 2023-07-21 11:16:13,933 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1ba6fb1c8b9ca3f6d638c6d25372eab9, ASSIGN 2023-07-21 11:16:13,933 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, ASSIGN 2023-07-21 11:16:13,934 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40783,1689938159262; forceNewPlan=false, retain=false 2023-07-21 11:16:13,934 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, ASSIGN; state=OFFLINE, 
location=jenkins-hbase17.apache.org,40467,1689938170241; forceNewPlan=false, retain=false 2023-07-21 11:16:13,935 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39805,1689938159444; forceNewPlan=false, retain=false 2023-07-21 11:16:13,939 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1ba6fb1c8b9ca3f6d638c6d25372eab9, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40783,1689938159262; forceNewPlan=false, retain=false 2023-07-21 11:16:13,940 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=28, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:14,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-21 11:16:14,071 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-21 11:16:14,093 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=9f0870d9333c22090af6906d223e01e9, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:14,093 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=2265ca53ec03c749164409bc942b21d8, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:14,093 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174093"}]},"ts":"1689938174093"} 2023-07-21 11:16:14,093 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938174093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174093"}]},"ts":"1689938174093"} 2023-07-21 11:16:14,093 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=da028d0dd3b64c4dfc6569fd0d999e6c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:14,094 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=cdf1f347d5b7f7314366b50840c18537, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:14,094 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174093"}]},"ts":"1689938174093"} 2023-07-21 11:16:14,094 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174093"}]},"ts":"1689938174093"} 2023-07-21 11:16:14,093 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1ba6fb1c8b9ca3f6d638c6d25372eab9, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:14,094 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174093"}]},"ts":"1689938174093"} 2023-07-21 11:16:14,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=36, state=RUNNABLE; OpenRegionProcedure 9f0870d9333c22090af6906d223e01e9, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:14,117 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=afc016f3656b887f1d07954a61494300, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:14,117 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689938172895.afc016f3656b887f1d07954a61494300.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174117"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174117"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174117"}]},"ts":"1689938174117"} 2023-07-21 11:16:14,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=37, state=RUNNABLE; OpenRegionProcedure 2265ca53ec03c749164409bc942b21d8, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:14,122 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=32, state=RUNNABLE; OpenRegionProcedure da028d0dd3b64c4dfc6569fd0d999e6c, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:14,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=31, state=RUNNABLE; OpenRegionProcedure cdf1f347d5b7f7314366b50840c18537, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:14,128 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=34, state=RUNNABLE; OpenRegionProcedure afc016f3656b887f1d07954a61494300, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:14,128 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=29, 
state=RUNNABLE; OpenRegionProcedure 1ba6fb1c8b9ca3f6d638c6d25372eab9, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:14,134 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8160c5907f44514700ae33cb307e3f40, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:14,134 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689938172895.8160c5907f44514700ae33cb307e3f40.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174133"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174133"}]},"ts":"1689938174133"} 2023-07-21 11:16:14,139 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=da0dcd8a3e03226381a32dee47d688df, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:14,139 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174138"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174138"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174138"}]},"ts":"1689938174138"} 2023-07-21 11:16:14,141 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=5ece16465b7ba9526d3620e0482ced3c, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:14,141 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=310a0e12e8c78eed458f01b87724c89e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:14,141 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174141"}]},"ts":"1689938174141"} 2023-07-21 11:16:14,141 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938174141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938174141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938174141"}]},"ts":"1689938174141"} 2023-07-21 11:16:14,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=30, state=RUNNABLE; OpenRegionProcedure 8160c5907f44514700ae33cb307e3f40, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:14,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=35, state=RUNNABLE; OpenRegionProcedure da0dcd8a3e03226381a32dee47d688df, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:14,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=33, state=RUNNABLE; OpenRegionProcedure 5ece16465b7ba9526d3620e0482ced3c, 
server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:14,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=28, state=RUNNABLE; OpenRegionProcedure 310a0e12e8c78eed458f01b87724c89e, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:14,274 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:14,274 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:14,276 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43068, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:14,289 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:14,289 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:14,305 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33162, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:14,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:14,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2265ca53ec03c749164409bc942b21d8, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-21 11:16:14,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 
2023-07-21 11:16:14,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1ba6fb1c8b9ca3f6d638c6d25372eab9, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-21 11:16:14,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,373 INFO [StoreOpener-2265ca53ec03c749164409bc942b21d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,385 INFO [StoreOpener-1ba6fb1c8b9ca3f6d638c6d25372eab9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 
2023-07-21 11:16:14,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 310a0e12e8c78eed458f01b87724c89e, NAME => 'Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-21 11:16:14,392 DEBUG [StoreOpener-2265ca53ec03c749164409bc942b21d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/f 2023-07-21 11:16:14,393 DEBUG [StoreOpener-2265ca53ec03c749164409bc942b21d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/f 2023-07-21 11:16:14,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,395 INFO [StoreOpener-2265ca53ec03c749164409bc942b21d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2265ca53ec03c749164409bc942b21d8 columnFamilyName f 2023-07-21 11:16:14,408 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 
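The recurring "Checking to see if procedure is done pid=27" entries are the client polling the master for completion of the CreateTableProcedure while the assignment and region-open steps above run. A minimal sketch of how a caller might wait for the table to come online, assuming the HBase 2.x Admin API; the method name and the 30-second budget are invented for illustration.

// Sketch: poll until the newly created table is available (all regions open).
// Assumes an already-open Connection; the timeout is an arbitrary example value.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class WaitForTableSketch {
  static void waitForTable(Connection conn) throws Exception {
    TableName name = TableName.valueOf("Group_testCreateMultiRegion");
    long deadline = System.currentTimeMillis() + 30_000L;
    try (Admin admin = conn.getAdmin()) {
      // isTableAvailable returns true only once every region has been assigned and opened.
      while (!admin.isTableAvailable(name)) {
        if (System.currentTimeMillis() > deadline) {
          throw new IllegalStateException("Table " + name + " not available in time");
        }
        Thread.sleep(200);   // mirrors the client's periodic "is procedure done" check
      }
    }
  }
}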
2023-07-21 11:16:14,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8160c5907f44514700ae33cb307e3f40, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-21 11:16:14,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,412 DEBUG [StoreOpener-1ba6fb1c8b9ca3f6d638c6d25372eab9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/f 2023-07-21 11:16:14,412 DEBUG [StoreOpener-1ba6fb1c8b9ca3f6d638c6d25372eab9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/f 2023-07-21 11:16:14,413 INFO [StoreOpener-2265ca53ec03c749164409bc942b21d8-1] regionserver.HStore(310): Store=2265ca53ec03c749164409bc942b21d8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,415 INFO [StoreOpener-310a0e12e8c78eed458f01b87724c89e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,416 INFO [StoreOpener-8160c5907f44514700ae33cb307e3f40-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,418 DEBUG [StoreOpener-310a0e12e8c78eed458f01b87724c89e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/f 2023-07-21 11:16:14,418 INFO [StoreOpener-1ba6fb1c8b9ca3f6d638c6d25372eab9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1ba6fb1c8b9ca3f6d638c6d25372eab9 columnFamilyName f 2023-07-21 11:16:14,418 DEBUG [StoreOpener-310a0e12e8c78eed458f01b87724c89e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/f 2023-07-21 11:16:14,418 INFO [StoreOpener-310a0e12e8c78eed458f01b87724c89e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 310a0e12e8c78eed458f01b87724c89e columnFamilyName f 2023-07-21 11:16:14,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,420 INFO [StoreOpener-310a0e12e8c78eed458f01b87724c89e-1] regionserver.HStore(310): Store=310a0e12e8c78eed458f01b87724c89e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,422 DEBUG [StoreOpener-8160c5907f44514700ae33cb307e3f40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/f 2023-07-21 11:16:14,422 DEBUG [StoreOpener-8160c5907f44514700ae33cb307e3f40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/f 2023-07-21 11:16:14,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,427 INFO [StoreOpener-1ba6fb1c8b9ca3f6d638c6d25372eab9-1] regionserver.HStore(310): Store=1ba6fb1c8b9ca3f6d638c6d25372eab9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,427 INFO [StoreOpener-8160c5907f44514700ae33cb307e3f40-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8160c5907f44514700ae33cb307e3f40 columnFamilyName f 2023-07-21 11:16:14,428 INFO [StoreOpener-8160c5907f44514700ae33cb307e3f40-1] regionserver.HStore(310): Store=8160c5907f44514700ae33cb307e3f40/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:14,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:14,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:14,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:14,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1ba6fb1c8b9ca3f6d638c6d25372eab9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9716479360, jitterRate=-0.0950823426246643}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1ba6fb1c8b9ca3f6d638c6d25372eab9: 2023-07-21 11:16:14,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9., pid=43, masterSystemTime=1689938174263 2023-07-21 11:16:14,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,483 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 8160c5907f44514700ae33cb307e3f40; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11854418080, jitterRate=0.10402871668338776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 8160c5907f44514700ae33cb307e3f40: 2023-07-21 11:16:14,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:14,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:14,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 
2023-07-21 11:16:14,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da028d0dd3b64c4dfc6569fd0d999e6c, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-21 11:16:14,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40., pid=44, masterSystemTime=1689938174288 2023-07-21 11:16:14,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 310a0e12e8c78eed458f01b87724c89e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11563019840, jitterRate=0.07689014077186584}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 310a0e12e8c78eed458f01b87724c89e: 2023-07-21 11:16:14,535 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1ba6fb1c8b9ca3f6d638c6d25372eab9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:14,535 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174512"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174512"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174512"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174512"}]},"ts":"1689938174512"} 2023-07-21 11:16:14,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 
2023-07-21 11:16:14,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e., pid=47, masterSystemTime=1689938174336 2023-07-21 11:16:14,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,537 INFO [StoreOpener-da028d0dd3b64c4dfc6569fd0d999e6c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,538 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8160c5907f44514700ae33cb307e3f40, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:14,539 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689938172895.8160c5907f44514700ae33cb307e3f40.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174538"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174538"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174538"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174538"}]},"ts":"1689938174538"} 2023-07-21 11:16:14,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2265ca53ec03c749164409bc942b21d8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11033845600, jitterRate=0.02760694921016693}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2265ca53ec03c749164409bc942b21d8: 2023-07-21 11:16:14,542 DEBUG [StoreOpener-da028d0dd3b64c4dfc6569fd0d999e6c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/f 2023-07-21 11:16:14,542 DEBUG [StoreOpener-da028d0dd3b64c4dfc6569fd0d999e6c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/f 2023-07-21 11:16:14,543 INFO [StoreOpener-da028d0dd3b64c4dfc6569fd0d999e6c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da028d0dd3b64c4dfc6569fd0d999e6c columnFamilyName f 2023-07-21 11:16:14,545 INFO [StoreOpener-da028d0dd3b64c4dfc6569fd0d999e6c-1] regionserver.HStore(310): Store=da028d0dd3b64c4dfc6569fd0d999e6c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:14,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:14,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,553 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:14,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:14,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => afc016f3656b887f1d07954a61494300, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-21 11:16:14,555 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=310a0e12e8c78eed458f01b87724c89e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:14,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 
2023-07-21 11:16:14,555 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938174555"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174555"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174555"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174555"}]},"ts":"1689938174555"} 2023-07-21 11:16:14,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ece16465b7ba9526d3620e0482ced3c, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-21 11:16:14,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,556 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8., pid=39, masterSystemTime=1689938174274 2023-07-21 11:16:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,559 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=29 2023-07-21 11:16:14,559 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=29, state=SUCCESS; OpenRegionProcedure 1ba6fb1c8b9ca3f6d638c6d25372eab9, server=jenkins-hbase17.apache.org,40783,1689938159262 in 410 msec 2023-07-21 11:16:14,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:14,569 INFO 
[StoreOpener-5ece16465b7ba9526d3620e0482ced3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,583 DEBUG [StoreOpener-5ece16465b7ba9526d3620e0482ced3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/f 2023-07-21 11:16:14,584 DEBUG [StoreOpener-5ece16465b7ba9526d3620e0482ced3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/f 2023-07-21 11:16:14,584 INFO [StoreOpener-5ece16465b7ba9526d3620e0482ced3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ece16465b7ba9526d3620e0482ced3c columnFamilyName f 2023-07-21 11:16:14,592 INFO [StoreOpener-afc016f3656b887f1d07954a61494300-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,594 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=30 2023-07-21 11:16:14,594 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=30, state=SUCCESS; OpenRegionProcedure 8160c5907f44514700ae33cb307e3f40, server=jenkins-hbase17.apache.org,39805,1689938159444 in 397 msec 2023-07-21 11:16:14,595 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened da028d0dd3b64c4dfc6569fd0d999e6c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9653170080, jitterRate=-0.10097847878932953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for da028d0dd3b64c4dfc6569fd0d999e6c: 2023-07-21 11:16:14,596 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, 
region=1ba6fb1c8b9ca3f6d638c6d25372eab9, ASSIGN in 668 msec 2023-07-21 11:16:14,600 INFO [StoreOpener-5ece16465b7ba9526d3620e0482ced3c-1] regionserver.HStore(310): Store=5ece16465b7ba9526d3620e0482ced3c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,603 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, ASSIGN in 703 msec 2023-07-21 11:16:14,602 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c., pid=40, masterSystemTime=1689938174263 2023-07-21 11:16:14,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,605 DEBUG [StoreOpener-afc016f3656b887f1d07954a61494300-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/f 2023-07-21 11:16:14,605 DEBUG [StoreOpener-afc016f3656b887f1d07954a61494300-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/f 2023-07-21 11:16:14,606 INFO [StoreOpener-afc016f3656b887f1d07954a61494300-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region afc016f3656b887f1d07954a61494300 columnFamilyName f 2023-07-21 11:16:14,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 
2023-07-21 11:16:14,607 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=28 2023-07-21 11:16:14,607 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=28, state=SUCCESS; OpenRegionProcedure 310a0e12e8c78eed458f01b87724c89e, server=jenkins-hbase17.apache.org,37137,1689938164928 in 410 msec 2023-07-21 11:16:14,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:14,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:14,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdf1f347d5b7f7314366b50840c18537, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-21 11:16:14,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,618 INFO [StoreOpener-cdf1f347d5b7f7314366b50840c18537-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,621 DEBUG [StoreOpener-cdf1f347d5b7f7314366b50840c18537-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/f 2023-07-21 11:16:14,621 DEBUG [StoreOpener-cdf1f347d5b7f7314366b50840c18537-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/f 2023-07-21 11:16:14,622 INFO [StoreOpener-cdf1f347d5b7f7314366b50840c18537-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdf1f347d5b7f7314366b50840c18537 columnFamilyName f 2023-07-21 11:16:14,625 INFO [StoreOpener-afc016f3656b887f1d07954a61494300-1] regionserver.HStore(310): Store=afc016f3656b887f1d07954a61494300/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,626 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=2265ca53ec03c749164409bc942b21d8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:14,626 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938174626"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174626"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174626"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174626"}]},"ts":"1689938174626"} 2023-07-21 11:16:14,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:14,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:14,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 
2023-07-21 11:16:14,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9f0870d9333c22090af6906d223e01e9, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-21 11:16:14,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,631 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=da028d0dd3b64c4dfc6569fd0d999e6c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:14,632 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174631"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174631"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174631"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174631"}]},"ts":"1689938174631"} 2023-07-21 11:16:14,633 INFO [StoreOpener-cdf1f347d5b7f7314366b50840c18537-1] regionserver.HStore(310): Store=cdf1f347d5b7f7314366b50840c18537/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:14,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for afc016f3656b887f1d07954a61494300 2023-07-21 11:16:14,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, ASSIGN in 716 msec 2023-07-21 11:16:14,640 INFO 
[StoreOpener-9f0870d9333c22090af6906d223e01e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:14,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 5ece16465b7ba9526d3620e0482ced3c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11275929760, jitterRate=0.050152793526649475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 5ece16465b7ba9526d3620e0482ced3c: 2023-07-21 11:16:14,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,643 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened afc016f3656b887f1d07954a61494300; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11249034560, jitterRate=0.04764798283576965}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for afc016f3656b887f1d07954a61494300: 2023-07-21 11:16:14,644 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c., pid=46, masterSystemTime=1689938174336 2023-07-21 11:16:14,648 DEBUG [StoreOpener-9f0870d9333c22090af6906d223e01e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/f 2023-07-21 11:16:14,649 DEBUG [StoreOpener-9f0870d9333c22090af6906d223e01e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/f 2023-07-21 11:16:14,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,649 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:14,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:14,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:14,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da0dcd8a3e03226381a32dee47d688df, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-21 11:16:14,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300., pid=42, masterSystemTime=1689938174288 2023-07-21 11:16:14,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:14,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,650 INFO [StoreOpener-9f0870d9333c22090af6906d223e01e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9f0870d9333c22090af6906d223e01e9 columnFamilyName f 2023-07-21 11:16:14,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened cdf1f347d5b7f7314366b50840c18537; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10643393600, jitterRate=-0.00875672698020935}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for cdf1f347d5b7f7314366b50840c18537: 2023-07-21 
11:16:14,652 INFO [StoreOpener-9f0870d9333c22090af6906d223e01e9-1] regionserver.HStore(310): Store=9f0870d9333c22090af6906d223e01e9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,653 INFO [StoreOpener-da0dcd8a3e03226381a32dee47d688df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,654 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=5ece16465b7ba9526d3620e0482ced3c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:14,655 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174654"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174654"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174654"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174654"}]},"ts":"1689938174654"} 2023-07-21 11:16:14,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:14,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 
2023-07-21 11:16:14,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=37 2023-07-21 11:16:14,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=37, state=SUCCESS; OpenRegionProcedure 2265ca53ec03c749164409bc942b21d8, server=jenkins-hbase17.apache.org,40467,1689938170241 in 517 msec 2023-07-21 11:16:14,663 DEBUG [StoreOpener-da0dcd8a3e03226381a32dee47d688df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/f 2023-07-21 11:16:14,664 DEBUG [StoreOpener-da0dcd8a3e03226381a32dee47d688df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/f 2023-07-21 11:16:14,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537., pid=41, masterSystemTime=1689938174274 2023-07-21 11:16:14,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:14,665 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=afc016f3656b887f1d07954a61494300, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:14,665 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689938172895.afc016f3656b887f1d07954a61494300.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174665"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174665"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174665"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174665"}]},"ts":"1689938174665"} 2023-07-21 11:16:14,666 INFO [StoreOpener-da0dcd8a3e03226381a32dee47d688df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da0dcd8a3e03226381a32dee47d688df columnFamilyName f 2023-07-21 11:16:14,667 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=32 2023-07-21 11:16:14,667 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=32, state=SUCCESS; OpenRegionProcedure da028d0dd3b64c4dfc6569fd0d999e6c, server=jenkins-hbase17.apache.org,40783,1689938159262 in 525 msec 2023-07-21 11:16:14,669 INFO [StoreOpener-da0dcd8a3e03226381a32dee47d688df-1] regionserver.HStore(310): Store=da0dcd8a3e03226381a32dee47d688df/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:14,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,679 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, ASSIGN in 772 msec 2023-07-21 11:16:14,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:14,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,680 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:14,682 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=cdf1f347d5b7f7314366b50840c18537, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:14,683 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174682"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174682"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174682"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174682"}]},"ts":"1689938174682"} 2023-07-21 11:16:14,683 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=33 2023-07-21 11:16:14,683 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=33, state=SUCCESS; OpenRegionProcedure 5ece16465b7ba9526d3620e0482ced3c, server=jenkins-hbase17.apache.org,37137,1689938164928 in 503 msec 2023-07-21 11:16:14,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:14,684 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 9f0870d9333c22090af6906d223e01e9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9723448320, jitterRate=-0.09443330764770508}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(965): Region open journal for 9f0870d9333c22090af6906d223e01e9: 2023-07-21 11:16:14,683 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, ASSIGN in 776 msec 2023-07-21 11:16:14,687 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9., pid=38, masterSystemTime=1689938174263 2023-07-21 11:16:14,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=34 2023-07-21 11:16:14,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=34, state=SUCCESS; OpenRegionProcedure afc016f3656b887f1d07954a61494300, server=jenkins-hbase17.apache.org,39805,1689938159444 in 542 msec 2023-07-21 11:16:14,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:14,690 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, ASSIGN in 792 msec 2023-07-21 11:16:14,690 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened da0dcd8a3e03226381a32dee47d688df; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11302307840, jitterRate=0.05260944366455078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:14,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for da0dcd8a3e03226381a32dee47d688df: 2023-07-21 11:16:14,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df., pid=45, masterSystemTime=1689938174336 2023-07-21 11:16:14,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:14,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 
2023-07-21 11:16:14,692 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, ASSIGN in 797 msec 2023-07-21 11:16:14,692 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=9f0870d9333c22090af6906d223e01e9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:14,692 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174692"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174692"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174692"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174692"}]},"ts":"1689938174692"} 2023-07-21 11:16:14,697 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=31 2023-07-21 11:16:14,697 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=31, state=SUCCESS; OpenRegionProcedure cdf1f347d5b7f7314366b50840c18537, server=jenkins-hbase17.apache.org,40467,1689938170241 in 564 msec 2023-07-21 11:16:14,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:14,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 
2023-07-21 11:16:14,706 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=da0dcd8a3e03226381a32dee47d688df, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:14,706 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938174706"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938174706"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938174706"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938174706"}]},"ts":"1689938174706"} 2023-07-21 11:16:14,717 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, ASSIGN in 806 msec 2023-07-21 11:16:14,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=36 2023-07-21 11:16:14,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=36, state=SUCCESS; OpenRegionProcedure 9f0870d9333c22090af6906d223e01e9, server=jenkins-hbase17.apache.org,40783,1689938159262 in 588 msec 2023-07-21 11:16:14,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=35 2023-07-21 11:16:14,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=35, state=SUCCESS; OpenRegionProcedure da0dcd8a3e03226381a32dee47d688df, server=jenkins-hbase17.apache.org,37137,1689938164928 in 568 msec 2023-07-21 11:16:14,733 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, ASSIGN in 833 msec 2023-07-21 11:16:14,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=27 2023-07-21 11:16:14,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da0dcd8a3e03226381a32dee47d688df, ASSIGN in 840 msec 2023-07-21 11:16:14,742 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:14,742 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938174742"}]},"ts":"1689938174742"} 2023-07-21 11:16:14,746 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-21 11:16:14,752 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:14,769 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 1.8570 sec 2023-07-21 11:16:15,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-21 11:16:15,047 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 27 completed 2023-07-21 11:16:15,047 DEBUG [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-21 11:16:15,048 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:15,060 WARN [RPCClient-NioEventLoopGroup-6-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:34719
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:34719
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:15,065 DEBUG [RPCClient-NioEventLoopGroup-6-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:34719 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:34719 2023-07-21 11:16:15,197 DEBUG [hconnection-0x2b8fd83-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:15,200 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:47086, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:15,208 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 2023-07-21 11:16:15,209 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:15,209 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 
2023-07-21 11:16:15,214 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-21 11:16:15,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCreateMultiRegion 2023-07-21 11:16:15,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=48, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:15,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-21 11:16:15,225 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938175225"}]},"ts":"1689938175225"} 2023-07-21 11:16:15,232 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-21 11:16:15,234 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-21 11:16:15,239 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1ba6fb1c8b9ca3f6d638c6d25372eab9, UNASSIGN}, {pid=50, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, UNASSIGN}, {pid=51, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, UNASSIGN}, {pid=52, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, UNASSIGN}, {pid=53, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, UNASSIGN}, {pid=54, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, UNASSIGN}, {pid=55, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da0dcd8a3e03226381a32dee47d688df, UNASSIGN}, {pid=56, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, UNASSIGN}, {pid=57, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, UNASSIGN}, {pid=58, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, UNASSIGN}] 2023-07-21 11:16:15,243 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1ba6fb1c8b9ca3f6d638c6d25372eab9, UNASSIGN 2023-07-21 11:16:15,246 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta 
row=1ba6fb1c8b9ca3f6d638c6d25372eab9, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:15,246 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175246"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175246"}]},"ts":"1689938175246"} 2023-07-21 11:16:15,247 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, UNASSIGN 2023-07-21 11:16:15,249 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, UNASSIGN 2023-07-21 11:16:15,251 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=8160c5907f44514700ae33cb307e3f40, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:15,251 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689938172895.8160c5907f44514700ae33cb307e3f40.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175251"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175251"}]},"ts":"1689938175251"} 2023-07-21 11:16:15,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=49, state=RUNNABLE; CloseRegionProcedure 1ba6fb1c8b9ca3f6d638c6d25372eab9, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:15,252 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, UNASSIGN 2023-07-21 11:16:15,253 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=cdf1f347d5b7f7314366b50840c18537, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:15,253 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175253"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175253"}]},"ts":"1689938175253"} 2023-07-21 11:16:15,258 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=50, state=RUNNABLE; CloseRegionProcedure 8160c5907f44514700ae33cb307e3f40, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:15,260 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=51, state=RUNNABLE; CloseRegionProcedure cdf1f347d5b7f7314366b50840c18537, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:15,261 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, UNASSIGN 2023-07-21 11:16:15,269 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=da028d0dd3b64c4dfc6569fd0d999e6c, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:15,269 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175269"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175269"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175269"}]},"ts":"1689938175269"} 2023-07-21 11:16:15,272 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=310a0e12e8c78eed458f01b87724c89e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:15,272 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938175272"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175272"}]},"ts":"1689938175272"} 2023-07-21 11:16:15,279 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=52, state=RUNNABLE; CloseRegionProcedure da028d0dd3b64c4dfc6569fd0d999e6c, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:15,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=58, state=RUNNABLE; CloseRegionProcedure 310a0e12e8c78eed458f01b87724c89e, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:15,296 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, UNASSIGN 2023-07-21 11:16:15,297 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, UNASSIGN 2023-07-21 11:16:15,304 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da0dcd8a3e03226381a32dee47d688df, UNASSIGN 2023-07-21 11:16:15,304 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=2265ca53ec03c749164409bc942b21d8, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:15,305 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, UNASSIGN 2023-07-21 11:16:15,305 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938175304"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175304"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175304"}]},"ts":"1689938175304"} 2023-07-21 11:16:15,306 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=9f0870d9333c22090af6906d223e01e9, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:15,307 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175306"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175306"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175306"}]},"ts":"1689938175306"} 2023-07-21 11:16:15,313 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=afc016f3656b887f1d07954a61494300, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:15,313 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689938172895.afc016f3656b887f1d07954a61494300.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175313"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175313"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175313"}]},"ts":"1689938175313"} 2023-07-21 11:16:15,314 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=da0dcd8a3e03226381a32dee47d688df, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:15,315 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175313"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175313"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175313"}]},"ts":"1689938175313"} 2023-07-21 11:16:15,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=57, state=RUNNABLE; CloseRegionProcedure 2265ca53ec03c749164409bc942b21d8, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:15,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=56, state=RUNNABLE; CloseRegionProcedure 9f0870d9333c22090af6906d223e01e9, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:15,316 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, UNASSIGN 2023-07-21 11:16:15,323 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=5ece16465b7ba9526d3620e0482ced3c, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:15,323 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175323"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938175323"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938175323"}]},"ts":"1689938175323"} 2023-07-21 11:16:15,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-21 11:16:15,326 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=54, state=RUNNABLE; CloseRegionProcedure afc016f3656b887f1d07954a61494300, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:15,329 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=55, state=RUNNABLE; CloseRegionProcedure da0dcd8a3e03226381a32dee47d688df, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:15,331 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=53, state=RUNNABLE; CloseRegionProcedure 5ece16465b7ba9526d3620e0482ced3c, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:15,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:15,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:15,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:15,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:15,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 9f0870d9333c22090af6906d223e01e9, disabling compactions & flushes 2023-07-21 11:16:15,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:15,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:15,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. after waiting 0 ms 2023-07-21 11:16:15,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 2023-07-21 11:16:15,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing da0dcd8a3e03226381a32dee47d688df, disabling compactions & flushes 2023-07-21 11:16:15,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 
2023-07-21 11:16:15,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:15,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. after waiting 0 ms 2023-07-21 11:16:15,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 2023-07-21 11:16:15,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing cdf1f347d5b7f7314366b50840c18537, disabling compactions & flushes 2023-07-21 11:16:15,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:15,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:15,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. after waiting 0 ms 2023-07-21 11:16:15,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:15,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 8160c5907f44514700ae33cb307e3f40, disabling compactions & flushes 2023-07-21 11:16:15,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:15,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:15,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. after waiting 0 ms 2023-07-21 11:16:15,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:15,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df. 
2023-07-21 11:16:15,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for da0dcd8a3e03226381a32dee47d688df: 2023-07-21 11:16:15,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-21 11:16:15,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:15,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:15,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 310a0e12e8c78eed458f01b87724c89e, disabling compactions & flushes 2023-07-21 11:16:15,545 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:15,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:15,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. after waiting 0 ms 2023-07-21 11:16:15,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 2023-07-21 11:16:15,549 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=da0dcd8a3e03226381a32dee47d688df, regionState=CLOSED 2023-07-21 11:16:15,549 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175548"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175548"}]},"ts":"1689938175548"} 2023-07-21 11:16:15,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9. 
2023-07-21 11:16:15,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 9f0870d9333c22090af6906d223e01e9: 2023-07-21 11:16:15,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:15,558 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:15,559 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=9f0870d9333c22090af6906d223e01e9, regionState=CLOSED 2023-07-21 11:16:15,559 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175559"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175559"}]},"ts":"1689938175559"} 2023-07-21 11:16:15,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing da028d0dd3b64c4dfc6569fd0d999e6c, disabling compactions & flushes 2023-07-21 11:16:15,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:15,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:15,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. after waiting 0 ms 2023-07-21 11:16:15,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:15,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40. 2023-07-21 11:16:15,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 8160c5907f44514700ae33cb307e3f40: 2023-07-21 11:16:15,600 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=55 2023-07-21 11:16:15,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e. 
2023-07-21 11:16:15,601 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=55, state=SUCCESS; CloseRegionProcedure da0dcd8a3e03226381a32dee47d688df, server=jenkins-hbase17.apache.org,37137,1689938164928 in 227 msec 2023-07-21 11:16:15,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 310a0e12e8c78eed458f01b87724c89e: 2023-07-21 11:16:15,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537. 2023-07-21 11:16:15,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for cdf1f347d5b7f7314366b50840c18537: 2023-07-21 11:16:15,606 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=56 2023-07-21 11:16:15,606 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=56, state=SUCCESS; CloseRegionProcedure 9f0870d9333c22090af6906d223e01e9, server=jenkins-hbase17.apache.org,40783,1689938159262 in 247 msec 2023-07-21 11:16:15,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:15,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:15,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 5ece16465b7ba9526d3620e0482ced3c, disabling compactions & flushes 2023-07-21 11:16:15,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:15,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:15,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. after waiting 0 ms 2023-07-21 11:16:15,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 
2023-07-21 11:16:15,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da0dcd8a3e03226381a32dee47d688df, UNASSIGN in 362 msec 2023-07-21 11:16:15,610 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=310a0e12e8c78eed458f01b87724c89e, regionState=CLOSED 2023-07-21 11:16:15,611 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938175610"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175610"}]},"ts":"1689938175610"} 2023-07-21 11:16:15,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:15,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close afc016f3656b887f1d07954a61494300 2023-07-21 11:16:15,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:15,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:15,616 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=8160c5907f44514700ae33cb307e3f40, regionState=CLOSED 2023-07-21 11:16:15,617 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689938172895.8160c5907f44514700ae33cb307e3f40.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175616"}]},"ts":"1689938175616"} 2023-07-21 11:16:15,617 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f0870d9333c22090af6906d223e01e9, UNASSIGN in 368 msec 2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing afc016f3656b887f1d07954a61494300, disabling compactions & flushes 2023-07-21 11:16:15,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2265ca53ec03c749164409bc942b21d8, disabling compactions & flushes 2023-07-21 11:16:15,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 
2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. after waiting 0 ms 2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 2023-07-21 11:16:15,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. after waiting 0 ms 2023-07-21 11:16:15,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 2023-07-21 11:16:15,654 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=cdf1f347d5b7f7314366b50840c18537, regionState=CLOSED 2023-07-21 11:16:15,655 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175654"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175654"}]},"ts":"1689938175654"} 2023-07-21 11:16:15,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300. 
2023-07-21 11:16:15,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for afc016f3656b887f1d07954a61494300: 2023-07-21 11:16:15,675 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=50 2023-07-21 11:16:15,676 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=50, state=SUCCESS; CloseRegionProcedure 8160c5907f44514700ae33cb307e3f40, server=jenkins-hbase17.apache.org,39805,1689938159444 in 389 msec 2023-07-21 11:16:15,675 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-21 11:16:15,677 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; CloseRegionProcedure 310a0e12e8c78eed458f01b87724c89e, server=jenkins-hbase17.apache.org,37137,1689938164928 in 335 msec 2023-07-21 11:16:15,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,681 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed afc016f3656b887f1d07954a61494300 2023-07-21 11:16:15,681 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c. 2023-07-21 11:16:15,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 5ece16465b7ba9526d3620e0482ced3c: 2023-07-21 11:16:15,681 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c. 2023-07-21 11:16:15,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for da028d0dd3b64c4dfc6569fd0d999e6c: 2023-07-21 11:16:15,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8. 
2023-07-21 11:16:15,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2265ca53ec03c749164409bc942b21d8: 2023-07-21 11:16:15,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=51 2023-07-21 11:16:15,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=51, state=SUCCESS; CloseRegionProcedure cdf1f347d5b7f7314366b50840c18537, server=jenkins-hbase17.apache.org,40467,1689938170241 in 405 msec 2023-07-21 11:16:15,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8160c5907f44514700ae33cb307e3f40, UNASSIGN in 440 msec 2023-07-21 11:16:15,693 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=310a0e12e8c78eed458f01b87724c89e, UNASSIGN in 437 msec 2023-07-21 11:16:15,705 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=afc016f3656b887f1d07954a61494300, regionState=CLOSED 2023-07-21 11:16:15,706 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689938172895.afc016f3656b887f1d07954a61494300.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175705"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175705"}]},"ts":"1689938175705"} 2023-07-21 11:16:15,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:15,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:15,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1ba6fb1c8b9ca3f6d638c6d25372eab9, disabling compactions & flushes 2023-07-21 11:16:15,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:15,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:15,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. after waiting 0 ms 2023-07-21 11:16:15,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 
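Once the last region (1ba6fb1c8b9ca3f6d638c6d25372eab9) is closed, the log moves on to deleting the table and archiving its region directories (see the DeleteTableProcedure pid=69 and the HFileArchiver entries further below). On the client side that step is again a single Admin call; the sketch below is illustrative only (hypothetical class and method names), assuming the HBase 2.x client API:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DeleteTableSketch {
  // Drop the (already disabled) table; this triggers the DeleteTableProcedure and the
  // HFileArchiver activity seen further below, which moves each region directory under
  // the archive/ tree before removing it from .tmp/data.
  static void dropTable(Admin admin) throws Exception {
    TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
    if (!admin.isTableDisabled(tn)) {
      admin.disableTable(tn); // deleteTable requires a disabled table
    }
    admin.deleteTable(tn);
    // After the procedure completes the table is gone from hbase:meta.
    assert !admin.tableExists(tn);
  }
}
```

deleteTable refuses enabled tables, which is why the test issues the DISABLE above before the DELETE that follows.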
2023-07-21 11:16:15,718 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=da028d0dd3b64c4dfc6569fd0d999e6c, regionState=CLOSED 2023-07-21 11:16:15,719 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175718"}]},"ts":"1689938175718"} 2023-07-21 11:16:15,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:15,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cdf1f347d5b7f7314366b50840c18537, UNASSIGN in 456 msec 2023-07-21 11:16:15,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:15,726 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=5ece16465b7ba9526d3620e0482ced3c, regionState=CLOSED 2023-07-21 11:16:15,727 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175726"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175726"}]},"ts":"1689938175726"} 2023-07-21 11:16:15,729 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=54 2023-07-21 11:16:15,729 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=54, state=SUCCESS; CloseRegionProcedure afc016f3656b887f1d07954a61494300, server=jenkins-hbase17.apache.org,39805,1689938159444 in 384 msec 2023-07-21 11:16:15,730 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=2265ca53ec03c749164409bc942b21d8, regionState=CLOSED 2023-07-21 11:16:15,730 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689938175730"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175730"}]},"ts":"1689938175730"} 2023-07-21 11:16:15,735 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=52 2023-07-21 11:16:15,735 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=52, state=SUCCESS; CloseRegionProcedure da028d0dd3b64c4dfc6569fd0d999e6c, server=jenkins-hbase17.apache.org,40783,1689938159262 in 445 msec 2023-07-21 11:16:15,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=afc016f3656b887f1d07954a61494300, UNASSIGN in 491 msec 2023-07-21 11:16:15,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=53 2023-07-21 11:16:15,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=53, state=SUCCESS; CloseRegionProcedure 5ece16465b7ba9526d3620e0482ced3c, 
server=jenkins-hbase17.apache.org,37137,1689938164928 in 398 msec 2023-07-21 11:16:15,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da028d0dd3b64c4dfc6569fd0d999e6c, UNASSIGN in 496 msec 2023-07-21 11:16:15,746 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=57 2023-07-21 11:16:15,746 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=57, state=SUCCESS; CloseRegionProcedure 2265ca53ec03c749164409bc942b21d8, server=jenkins-hbase17.apache.org,40467,1689938170241 in 419 msec 2023-07-21 11:16:15,747 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5ece16465b7ba9526d3620e0482ced3c, UNASSIGN in 504 msec 2023-07-21 11:16:15,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:15,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9. 2023-07-21 11:16:15,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1ba6fb1c8b9ca3f6d638c6d25372eab9: 2023-07-21 11:16:15,751 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=2265ca53ec03c749164409bc942b21d8, UNASSIGN in 507 msec 2023-07-21 11:16:15,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:15,754 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=1ba6fb1c8b9ca3f6d638c6d25372eab9, regionState=CLOSED 2023-07-21 11:16:15,754 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938175754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938175754"}]},"ts":"1689938175754"} 2023-07-21 11:16:15,760 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=49 2023-07-21 11:16:15,760 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=49, state=SUCCESS; CloseRegionProcedure 1ba6fb1c8b9ca3f6d638c6d25372eab9, server=jenkins-hbase17.apache.org,40783,1689938159262 in 504 msec 2023-07-21 11:16:15,764 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=48 2023-07-21 11:16:15,764 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1ba6fb1c8b9ca3f6d638c6d25372eab9, UNASSIGN in 524 msec 2023-07-21 11:16:15,765 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938175765"}]},"ts":"1689938175765"} 2023-07-21 11:16:15,767 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-21 11:16:15,768 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-21 11:16:15,771 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion in 554 msec 2023-07-21 11:16:15,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-21 11:16:15,830 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateMultiRegion, procId: 48 completed 2023-07-21 11:16:15,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCreateMultiRegion 2023-07-21 11:16:15,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:15,841 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=69, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:15,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-21 11:16:15,843 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=69, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:15,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:15,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:15,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:15,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:15,867 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:15,881 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/recovered.edits] 2023-07-21 11:16:15,883 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/recovered.edits] 2023-07-21 11:16:15,884 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/recovered.edits] 2023-07-21 11:16:15,884 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/recovered.edits] 2023-07-21 11:16:15,884 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/f, FileablePath, 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/recovered.edits] 2023-07-21 11:16:15,888 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/recovered.edits] 2023-07-21 11:16:15,889 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/recovered.edits] 2023-07-21 11:16:15,889 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/recovered.edits] 2023-07-21 11:16:15,914 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40/recovered.edits/4.seqid 2023-07-21 11:16:15,921 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/8160c5907f44514700ae33cb307e3f40 2023-07-21 11:16:15,921 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:15,928 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9/recovered.edits/4.seqid 2023-07-21 11:16:15,936 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300/recovered.edits/4.seqid 2023-07-21 11:16:15,938 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/afc016f3656b887f1d07954a61494300 2023-07-21 11:16:15,938 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:15,942 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/9f0870d9333c22090af6906d223e01e9 2023-07-21 11:16:15,943 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537/recovered.edits/4.seqid 2023-07-21 11:16:15,944 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c/recovered.edits/4.seqid 2023-07-21 11:16:15,947 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df/recovered.edits/4.seqid 2023-07-21 11:16:15,948 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/cdf1f347d5b7f7314366b50840c18537 2023-07-21 11:16:15,949 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da028d0dd3b64c4dfc6569fd0d999e6c 2023-07-21 11:16:15,950 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/recovered.edits] 2023-07-21 11:16:15,951 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/da0dcd8a3e03226381a32dee47d688df 2023-07-21 11:16:15,952 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9/recovered.edits/4.seqid 2023-07-21 11:16:15,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-21 11:16:15,955 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c/recovered.edits/4.seqid 2023-07-21 11:16:15,955 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/recovered.edits] 2023-07-21 11:16:15,955 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/1ba6fb1c8b9ca3f6d638c6d25372eab9 2023-07-21 11:16:15,957 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/5ece16465b7ba9526d3620e0482ced3c 2023-07-21 11:16:15,965 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8/recovered.edits/4.seqid 2023-07-21 11:16:15,968 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/2265ca53ec03c749164409bc942b21d8 2023-07-21 11:16:15,968 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e/recovered.edits/4.seqid 2023-07-21 11:16:15,969 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateMultiRegion/310a0e12e8c78eed458f01b87724c89e 2023-07-21 11:16:15,970 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-21 11:16:15,975 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=69, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:15,982 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-21 11:16:15,990 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 2023-07-21 11:16:15,992 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=69, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:15,992 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 2023-07-21 11:16:15,993 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,993 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689938172895.8160c5907f44514700ae33cb307e3f40.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,993 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,993 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,993 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,993 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689938172895.afc016f3656b887f1d07954a61494300.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,994 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete 
{"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,994 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,994 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,994 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938175992"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:15,998 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-21 11:16:15,998 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1ba6fb1c8b9ca3f6d638c6d25372eab9, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689938172895.1ba6fb1c8b9ca3f6d638c6d25372eab9.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, {ENCODED => 8160c5907f44514700ae33cb307e3f40, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689938172895.8160c5907f44514700ae33cb307e3f40.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => cdf1f347d5b7f7314366b50840c18537, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689938172895.cdf1f347d5b7f7314366b50840c18537.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => da028d0dd3b64c4dfc6569fd0d999e6c, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689938172895.da028d0dd3b64c4dfc6569fd0d999e6c.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => 5ece16465b7ba9526d3620e0482ced3c, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689938172895.5ece16465b7ba9526d3620e0482ced3c.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => afc016f3656b887f1d07954a61494300, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689938172895.afc016f3656b887f1d07954a61494300.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => da0dcd8a3e03226381a32dee47d688df, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689938172895.da0dcd8a3e03226381a32dee47d688df.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => 9f0870d9333c22090af6906d223e01e9, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689938172895.9f0870d9333c22090af6906d223e01e9.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => 2265ca53ec03c749164409bc942b21d8, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689938172895.2265ca53ec03c749164409bc942b21d8.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => 310a0e12e8c78eed458f01b87724c89e, NAME => 'Group_testCreateMultiRegion,,1689938172895.310a0e12e8c78eed458f01b87724c89e.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-21 11:16:15,998 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 
'Group_testCreateMultiRegion' as deleted. 2023-07-21 11:16:15,999 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938175998"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:16,002 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-21 11:16:16,013 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=69, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:16,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 182 msec 2023-07-21 11:16:16,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-21 11:16:16,156 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 69 completed 2023-07-21 11:16:16,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:16,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:16:16,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:16,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:16,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:16,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:16,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:16,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:16,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:16,223 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:16,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:16,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:16,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:16,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:16,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:16,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:16,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:16,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 251 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939376277, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:16,281 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:16,283 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:16,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,286 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:16,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:16,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:16,330 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=503 (was 487) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:32866 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2b8fd83-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1888201507_17 at /127.0.0.1:51944 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1760670747_17 at /127.0.0.1:39944 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=790 (was 759) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=825 (was 810) - SystemLoadAverage LEAK? 
-, ProcessCount=186 (was 186), AvailableMemoryMB=2854 (was 2019) - AvailableMemoryMB LEAK? - 2023-07-21 11:16:16,330 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-21 11:16:16,370 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=503, OpenFileDescriptor=790, MaxFileDescriptor=60000, SystemLoadAverage=825, ProcessCount=186, AvailableMemoryMB=2847 2023-07-21 11:16:16,370 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-21 11:16:16,370 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-21 11:16:16,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:16,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:16,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:16,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:16,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:16,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:16,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:16,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:16,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:16,421 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:16,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:16,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:16,432 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:16,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:16,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:16,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:16,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:16,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 279 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939376443, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:16,444 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:16,446 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:16,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,453 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:16,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:16,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:16,455 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-21 11:16:16,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:16,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:16,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup appInfo 2023-07-21 11:16:16,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:16,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:16,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:16,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:16,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:16,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:16,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:16,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37137] to rsgroup appInfo 2023-07-21 11:16:16,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:16,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:16,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:16,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:16,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(238): Moving server region 2782e41606006289532e239f665ea4eb, which do not belong to RSGroup appInfo 2023-07-21 11:16:16,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:16,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:16,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:16,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:16,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:16,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=70, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:16,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup appInfo 2023-07-21 11:16:16,506 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:16,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:16,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:16,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:16,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:16,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 
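The AddRSGroup and MoveServers requests logged above are issued by the test through the RSGroupAdminClient visible earlier in the stack trace (RSGroupAdminClient.moveServers). A minimal client-side sketch of those two calls, using the host and port from the log; the connection setup and class wrapper are illustrative, not the test's literal code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServerToGroupSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the cluster under test; the configuration source here is illustrative.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master logs this as RSGroupAdminService.AddRSGroup ("add rsgroup appInfo").
          rsGroupAdmin.addRSGroup("appInfo");
          // The master logs this as RSGroupAdminService.MoveServers; host:port taken from the log above.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 37137)),
              "appInfo");
        }
      }
    }

Moving the server triggers the REOPEN/MOVE procedures that follow, because regions not belonging to the target group (hbase:meta and hbase:rsgroup here) must first be moved off that server.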
2023-07-21 11:16:16,508 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:16,508 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938176508"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938176508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938176508"}]},"ts":"1689938176508"} 2023-07-21 11:16:16,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:16,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-21 11:16:16,510 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:16,511 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=70, state=RUNNABLE; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:16,512 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37137,1689938164928, state=CLOSING 2023-07-21 11:16:16,513 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:16,513 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:16,514 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=71, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:16,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 11:16:16,672 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:16,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:16,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 0 ms 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:16,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:16,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2782e41606006289532e239f665ea4eb 1/1 column families, dataSize=5.73 KB heapSize=9.42 KB 2023-07-21 11:16:16,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=30.14 KB heapSize=48.20 KB 2023-07-21 11:16:16,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.73 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:16,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:16,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/0fb9bf38ccef403bbe61f4b8544ca472 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:16,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:16,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472, entries=10, sequenceid=37, filesize=5.4 K 2023-07-21 11:16:16,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.73 KB/5872, heapSize ~9.41 KB/9632, currentSize=0 B/0 for 2782e41606006289532e239f665ea4eb in 136ms, sequenceid=37, compaction requested=false 2023-07-21 11:16:16,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): 
Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=12 2023-07-21 11:16:16,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:16,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:16,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:16,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 2782e41606006289532e239f665ea4eb move to jenkins-hbase17.apache.org,40467,1689938170241 record at close sequenceid=37 2023-07-21 11:16:16,834 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=72, ppid=70, state=RUNNABLE; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:16,834 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:17,145 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.26 KB at sequenceid=85 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:17,155 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:17,184 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.19 KB at sequenceid=85 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/rep_barrier/f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:17,202 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:17,448 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.69 KB at sequenceid=85 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:17,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:17,463 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/b65be13c0dc640f9a57e3a19398ea4b9 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 
11:16:17,475 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:17,476 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9, entries=33, sequenceid=85, filesize=8.4 K 2023-07-21 11:16:17,479 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/rep_barrier/f8e5cb731248424f9ac24182335eb922 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:17,490 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:17,490 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/f8e5cb731248424f9ac24182335eb922, entries=11, sequenceid=85, filesize=6.1 K 2023-07-21 11:16:17,492 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/53441bb4613b4a9e8e92ee74f2b2633b as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:17,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:17,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b, entries=13, sequenceid=85, filesize=6.1 K 2023-07-21 11:16:17,509 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~30.14 KB/30861, heapSize ~48.15 KB/49304, currentSize=0 B/0 for 1588230740 in 836ms, sequenceid=85, compaction requested=false 2023-07-21 11:16:17,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure.ProcedureSyncWait(216): waitFor pid=70 2023-07-21 11:16:17,572 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=18 2023-07-21 11:16:17,575 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:17,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:17,576 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:17,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase17.apache.org,39805,1689938159444 record at close sequenceid=85 2023-07-21 11:16:17,581 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 11:16:17,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 11:16:17,585 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=71 2023-07-21 11:16:17,585 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=71, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37137,1689938164928 in 1.0670 sec 2023-07-21 11:16:17,586 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=71, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,39805,1689938159444; forceNewPlan=false, retain=false 2023-07-21 11:16:17,737 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:16:17,737 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,39805,1689938159444, state=OPENING 2023-07-21 11:16:17,793 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=71, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:17,793 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:17,793 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:17,970 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:16:17,970 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:17,975 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C39805%2C1689938159444.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:17,996 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:18,017 DEBUG [RS-EventLoopGroup-8-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:18,018 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:18,027 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444/jenkins-hbase17.apache.org%2C39805%2C1689938159444.meta.1689938177976.meta 2023-07-21 11:16:18,028 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:18,028 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:18,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:18,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:16:18,029 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
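The WALFactory entry above shows the relocated meta region's new host creating its WAL through AsyncFSWALProvider. For reference only, a tiny sketch of selecting that provider explicitly via the hbase.wal.provider key ("asyncfs" is the provider id behind AsyncFSWALProvider in HBase 2.x); the minicluster in this log is simply using the default, so the explicit set is an illustrative assumption:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        // Selects org.apache.hadoop.hbase.wal.AsyncFSWALProvider for region server WALs.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }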
2023-07-21 11:16:18,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:16:18,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:18,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:16:18,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:16:18,044 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:18,046 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:18,046 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:18,046 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:16:18,085 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853 2023-07-21 11:16:18,094 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:18,094 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:18,094 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:18,094 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 
11:16:18,095 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:18,095 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:18,096 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:18,105 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:18,106 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:18,106 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:18,106 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:18,114 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:18,114 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:18,114 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:18,128 DEBUG [StoreOpener-1588230740-1] 
regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da 2023-07-21 11:16:18,137 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:18,138 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:18,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:18,139 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:18,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:18,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 11:16:18,146 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:18,147 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11419368960, jitterRate=0.06351161003112793}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:18,148 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:18,150 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=74, masterSystemTime=1689938177948 2023-07-21 11:16:18,154 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:16:18,154 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:16:18,155 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,39805,1689938159444, state=OPEN 2023-07-21 11:16:18,156 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:18,156 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:18,157 INFO [PEWorker-3] 
assignment.RegionStateStore(219): pid=70 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=CLOSED 2023-07-21 11:16:18,157 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938178157"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938178157"}]},"ts":"1689938178157"} 2023-07-21 11:16:18,158 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37137] ipc.CallRunner(144): callId: 182 service: ClientService methodName: Mutate size: 214 connection: 136.243.18.41:47052 deadline: 1689938238158, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=39805 startCode=1689938159444. As of locationSeqNum=85. 2023-07-21 11:16:18,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=71 2023-07-21 11:16:18,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=71, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,39805,1689938159444 in 364 msec 2023-07-21 11:16:18,164 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 1.6530 sec 2023-07-21 11:16:18,260 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:18,262 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:18,269 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=70 2023-07-21 11:16:18,269 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=70, state=SUCCESS; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,37137,1689938164928 in 1.7530 sec 2023-07-21 11:16:18,271 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=70, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,40467,1689938170241; forceNewPlan=false, retain=false 2023-07-21 11:16:18,421 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:16:18,422 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:18,422 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938178421"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938178421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938178421"}]},"ts":"1689938178421"} 2023-07-21 11:16:18,431 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:18,591 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:18,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:18,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:18,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. service=MultiRowMutationService 2023-07-21 11:16:18,592 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 11:16:18,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:18,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,594 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,595 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:18,595 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:18,595 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2782e41606006289532e239f665ea4eb columnFamilyName m 2023-07-21 11:16:18,605 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:18,605 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:18,611 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7 2023-07-21 11:16:18,611 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(310): Store=2782e41606006289532e239f665ea4eb/m, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:18,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,613 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:18,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2782e41606006289532e239f665ea4eb; next sequenceid=41; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5043b208, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:18,618 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:18,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., pid=75, masterSystemTime=1689938178584 2023-07-21 11:16:18,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:18,620 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:18,621 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPEN, openSeqNum=41, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:18,621 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938178621"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938178621"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938178621"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938178621"}]},"ts":"1689938178621"} 2023-07-21 11:16:18,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-21 11:16:18,626 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,40467,1689938170241 in 191 msec 2023-07-21 11:16:18,627 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE in 2.1250 sec 2023-07-21 11:16:19,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37137,1689938164928] are moved back to default 2023-07-21 11:16:19,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-21 11:16:19,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:19,512 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37137] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:47058 deadline: 1689938239512, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=40467 startCode=1689938170241. As of locationSeqNum=37. 2023-07-21 11:16:19,614 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37137] ipc.CallRunner(144): callId: 15 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:47058 deadline: 1689938239614, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=39805 startCode=1689938159444. As of locationSeqNum=85. 
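Once "Move servers done: default => appInfo" is logged, the GetRSGroupInfo calls that follow are the test confirming the membership change. A hedged sketch of that check, reusing the rsGroupAdmin client from the earlier snippet; the variable names and assertion style are illustrative:

    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // The master logs this as RSGroupAdminService.GetRSGroupInfo.
    RSGroupInfo appInfo = rsGroupAdmin.getRSGroupInfo("appInfo");
    boolean moved = appInfo.getServers()
        .contains(Address.fromParts("jenkins-hbase17.apache.org", 37137));
    // 'moved' should be true after the move completes, as reported in the log above.

The RegionMovedException entries around this point are the expected client-side redirects while cached locations for hbase:meta and hbase:rsgroup catch up with the new region placements.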
2023-07-21 11:16:19,715 DEBUG [hconnection-0x4543071c-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:19,718 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33824, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:19,723 DEBUG [hconnection-0x4543071c-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:19,726 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59078, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:19,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:19,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:19,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=appInfo 2023-07-21 11:16:19,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:19,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-21 11:16:19,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:19,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-21 11:16:19,780 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:19,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 29 msec 2023-07-21 11:16:19,841 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:19,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-21 11:16:19,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:19,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:19,882 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=77, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:19,883 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37137] ipc.CallRunner(144): callId: 189 service: ClientService methodName: ExecService size: 542 connection: 136.243.18.41:47052 deadline: 1689938239883, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=40467 startCode=1689938170241. As of locationSeqNum=37. 2023-07-21 11:16:19,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 77 2023-07-21 11:16:19,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 11:16:19,986 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:19,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 11:16:19,987 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59080, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:19,991 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:19,992 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:19,993 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:19,993 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:19,996 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=77, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:20,002 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,003 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab empty. 
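The CreateNamespaceProcedure and CreateTableProcedure entries above correspond to the test creating a namespace pinned to the appInfo group (via its hbase.rsgroup.name property, as shown in the HMaster log line) and a single-family table inside it. A minimal Admin-side sketch of those two calls, reusing the conn from the first snippet; this is not the test's literal code:

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    try (Admin admin = conn.getAdmin()) {
      // Logged by HMaster as: creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'}
      admin.createNamespace(
          NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "appInfo")
              .build());
      // Logged by HMaster as: create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', ...}
      admin.createTable(
          TableDescriptorBuilder.newBuilder(
                  TableName.valueOf("Group_foo", "Group_testCreateAndAssign"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
    }

Because the namespace carries hbase.rsgroup.name => 'appInfo', the ASSIGN procedure that follows should place the new region on the server that was just moved into the appInfo group.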
2023-07-21 11:16:20,004 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,004 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-21 11:16:20,040 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:20,041 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 19380a2a5ae6802d9672fd92766295ab, NAME => 'Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:20,082 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:20,082 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 19380a2a5ae6802d9672fd92766295ab, disabling compactions & flushes 2023-07-21 11:16:20,082 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:20,082 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:20,083 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. after waiting 0 ms 2023-07-21 11:16:20,083 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:20,083 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 
2023-07-21 11:16:20,083 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 19380a2a5ae6802d9672fd92766295ab: 2023-07-21 11:16:20,085 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=77, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:20,086 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689938180086"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938180086"}]},"ts":"1689938180086"} 2023-07-21 11:16:20,089 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:20,091 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=77, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:20,091 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938180091"}]},"ts":"1689938180091"} 2023-07-21 11:16:20,093 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-21 11:16:20,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, ASSIGN}] 2023-07-21 11:16:20,105 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, ASSIGN 2023-07-21 11:16:20,107 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:20,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 11:16:20,259 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=19380a2a5ae6802d9672fd92766295ab, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:20,259 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689938180259"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938180259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938180259"}]},"ts":"1689938180259"} 2023-07-21 11:16:20,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; 
OpenRegionProcedure 19380a2a5ae6802d9672fd92766295ab, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:20,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:20,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 19380a2a5ae6802d9672fd92766295ab, NAME => 'Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:20,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:20,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,436 INFO [StoreOpener-19380a2a5ae6802d9672fd92766295ab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,438 DEBUG [StoreOpener-19380a2a5ae6802d9672fd92766295ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/f 2023-07-21 11:16:20,438 DEBUG [StoreOpener-19380a2a5ae6802d9672fd92766295ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/f 2023-07-21 11:16:20,439 INFO [StoreOpener-19380a2a5ae6802d9672fd92766295ab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 19380a2a5ae6802d9672fd92766295ab columnFamilyName f 2023-07-21 11:16:20,439 INFO [StoreOpener-19380a2a5ae6802d9672fd92766295ab-1] regionserver.HStore(310): Store=19380a2a5ae6802d9672fd92766295ab/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
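The store opener above builds column family 'f' from the defaults dumped in the create entries (BLOOMFILTER=ROW, VERSIONS=1, BLOCKSIZE=65536, block cache on, no compression or encoding). For reference, a hedged sketch of the explicit builder form that should produce the same descriptor with the HBase 2.x client API; the values are copied from the log, the class and method names are illustrative only.

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    final class GroupFooFamily {
      // Same settings the create-table entries log for family 'f'.
      static ColumnFamilyDescriptor familyF() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
            .build();
      }
    }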
2023-07-21 11:16:20,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:20,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:20,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 19380a2a5ae6802d9672fd92766295ab; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11307107840, jitterRate=0.05305647850036621}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:20,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 19380a2a5ae6802d9672fd92766295ab: 2023-07-21 11:16:20,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab., pid=79, masterSystemTime=1689938180420 2023-07-21 11:16:20,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:20,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 
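pid=78/79 above show the table's single region being assigned and opened on jenkins-hbase17.apache.org,37137. A client that needs the table to be usable before reading or writing can block on availability; a small sketch using only standard Admin API calls (the polling loop and timeout handling are assumptions, not taken from the test source):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class TableWait {
      // Poll until every region of the table is assigned and online, or give up.
      static void waitForTable(Admin admin, TableName tn, long timeoutMs)
          throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!admin.isTableAvailable(tn)) {
          if (System.currentTimeMillis() > deadline) {
            throw new IllegalStateException(tn + " not available after " + timeoutMs + " ms");
          }
          Thread.sleep(200);
        }
      }
    }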
2023-07-21 11:16:20,459 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=19380a2a5ae6802d9672fd92766295ab, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:20,460 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689938180459"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938180459"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938180459"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938180459"}]},"ts":"1689938180459"} 2023-07-21 11:16:20,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-21 11:16:20,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; OpenRegionProcedure 19380a2a5ae6802d9672fd92766295ab, server=jenkins-hbase17.apache.org,37137,1689938164928 in 201 msec 2023-07-21 11:16:20,487 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=77 2023-07-21 11:16:20,488 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=77, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, ASSIGN in 372 msec 2023-07-21 11:16:20,489 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=77, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:20,489 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938180489"}]},"ts":"1689938180489"} 2023-07-21 11:16:20,491 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-21 11:16:20,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 11:16:20,493 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=77, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:20,503 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 618 msec 2023-07-21 11:16:20,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 11:16:20,995 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 77 completed 2023-07-21 11:16:20,995 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:21,000 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-21 11:16:21,012 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938181012"}]},"ts":"1689938181012"} 2023-07-21 11:16:21,014 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-21 11:16:21,015 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-21 11:16:21,016 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=80, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, UNASSIGN}] 2023-07-21 11:16:21,019 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, ppid=80, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, UNASSIGN 2023-07-21 11:16:21,020 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=19380a2a5ae6802d9672fd92766295ab, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:21,020 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689938181020"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938181020"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938181020"}]},"ts":"1689938181020"} 2023-07-21 11:16:21,022 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 19380a2a5ae6802d9672fd92766295ab, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:21,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-21 11:16:21,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:21,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 19380a2a5ae6802d9672fd92766295ab, disabling compactions & flushes 2023-07-21 11:16:21,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:21,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 
2023-07-21 11:16:21,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. after waiting 0 ms 2023-07-21 11:16:21,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:21,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:21,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab. 2023-07-21 11:16:21,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 19380a2a5ae6802d9672fd92766295ab: 2023-07-21 11:16:21,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:21,185 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=19380a2a5ae6802d9672fd92766295ab, regionState=CLOSED 2023-07-21 11:16:21,185 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689938181185"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938181185"}]},"ts":"1689938181185"} 2023-07-21 11:16:21,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 11:16:21,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 19380a2a5ae6802d9672fd92766295ab, server=jenkins-hbase17.apache.org,37137,1689938164928 in 165 msec 2023-07-21 11:16:21,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=80 2023-07-21 11:16:21,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=80, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=19380a2a5ae6802d9672fd92766295ab, UNASSIGN in 173 msec 2023-07-21 11:16:21,192 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938181192"}]},"ts":"1689938181192"} 2023-07-21 11:16:21,193 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-21 11:16:21,196 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-21 11:16:21,198 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 196 msec 2023-07-21 11:16:21,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-21 11:16:21,315 INFO [Listener 
at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 80 completed 2023-07-21 11:16:21,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=83, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,323 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=83, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-21 11:16:21,325 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=83, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,330 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:21,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:21,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:21,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:21,333 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/recovered.edits] 2023-07-21 11:16:21,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-21 11:16:21,342 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab/recovered.edits/4.seqid 2023-07-21 11:16:21,344 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_foo/Group_testCreateAndAssign/19380a2a5ae6802d9672fd92766295ab 2023-07-21 11:16:21,344 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-21 11:16:21,348 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=83, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,351 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-21 11:16:21,355 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-21 11:16:21,357 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=83, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,357 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-21 11:16:21,357 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938181357"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:21,360 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:16:21,360 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 19380a2a5ae6802d9672fd92766295ab, NAME => 'Group_foo:Group_testCreateAndAssign,,1689938179876.19380a2a5ae6802d9672fd92766295ab.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:16:21,360 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 
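pids 80 through 83 record the disable and delete of Group_foo:Group_testCreateAndAssign, and the Group_foo namespace itself is dropped right after (pid=84 below). A minimal sketch of the corresponding client calls with the plain Admin API, connection setup omitted and names copied from the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropGroupFoo {
      static void drop(Admin admin) throws IOException {
        TableName tn = TableName.valueOf("Group_foo:Group_testCreateAndAssign");
        admin.disableTable(tn);              // DisableTableProcedure (pid=80) + region UNASSIGN (pid=81/82)
        admin.deleteTable(tn);               // DeleteTableProcedure (pid=83): archive FS layout, clean hbase:meta
        admin.deleteNamespace("Group_foo");  // DeleteNamespaceProcedure (pid=84, further down)
      }
    }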
2023-07-21 11:16:21,360 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938181360"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:21,365 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-21 11:16:21,367 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=83, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:21,368 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=83, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 51 msec 2023-07-21 11:16:21,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-21 11:16:21,437 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 83 completed 2023-07-21 11:16:21,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete Group_foo 2023-07-21 11:16:21,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:21,462 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=84, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:21,466 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=84, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:21,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 11:16:21,472 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=84, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:21,472 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 11:16:21,473 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:21,474 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=84, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:21,477 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=84, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:21,478 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 27 msec 2023-07-21 11:16:21,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 11:16:21,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:21,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:21,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:21,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:21,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:21,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:21,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:21,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:21,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:21,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:16:21,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:21,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:21,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
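The teardown that follows walks the cluster back to its pre-test rsgroup layout: the empty MoveTables/MoveServers requests are ignored, the server at port 37137 is moved from appInfo back to default, and the appInfo and master groups are removed and re-created. A sketch of the two substantive calls, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (a private-audience class, shown only for illustration; host and port copied from the log):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class GroupTeardown {
      // Move the only appInfo member back to 'default', then drop the group.
      static void restoreDefaultGroup(Connection conn) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 37137)),
            "default");
        groups.removeRSGroup("appInfo");
      }
    }

The ConstraintException further down, raised for jenkins-hbase17.apache.org:41077, comes from offering the master's address to moveServers; only live region servers are accepted, so the call is rejected and the test just logs it during teardown.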
2023-07-21 11:16:21,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:21,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37137] to rsgroup default 2023-07-21 11:16:21,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:21,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:21,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-21 11:16:21,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37137,1689938164928] are moved back to appInfo 2023-07-21 11:16:21,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-21 11:16:21,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:21,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup appInfo 2023-07-21 11:16:21,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:21,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:21,694 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:21,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:21,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:21,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:21,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 
2023-07-21 11:16:21,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:21,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:21,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:21,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:21,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939381729, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:21,730 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:21,732 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:21,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:21,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:21,734 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:21,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:21,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:21,763 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=522 (was 503) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_87240163_17 at /127.0.0.1:32866 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741863_1039, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927243827_17 at /127.0.0.1:39998 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927243827_17 at /127.0.0.1:41276 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2108709732_17 at /127.0.0.1:50728 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927243827_17 at /127.0.0.1:41280 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_87240163_17 at /127.0.0.1:41268 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_87240163_17 at /127.0.0.1:51944 [Waiting for operation #13] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1888201507_17 at /127.0.0.1:44950 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741863_1039] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741863_1039, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_87240163_17 at /127.0.0.1:39944 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_87240163_17 at /127.0.0.1:50712 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927243827_17 at /127.0.0.1:39988 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1888201507_17 at /127.0.0.1:50710 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741863_1039] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1888201507_17 at /127.0.0.1:41258 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741863_1039] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741863_1039, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,39805,1689938159444.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=819 (was 790) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=839 (was 825) - SystemLoadAverage LEAK? -, ProcessCount=185 (was 186), AvailableMemoryMB=3663 (was 2847) - AvailableMemoryMB LEAK? 
- 2023-07-21 11:16:21,763 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-21 11:16:21,786 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=522, OpenFileDescriptor=819, MaxFileDescriptor=60000, SystemLoadAverage=839, ProcessCount=186, AvailableMemoryMB=3658 2023-07-21 11:16:21,786 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-21 11:16:21,786 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-21 11:16:21,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:21,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:21,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:21,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:21,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:21,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:21,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:21,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:21,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:21,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:21,810 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:21,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:21,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 11:16:21,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:21,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:21,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:21,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:21,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:21,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:21,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 398 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939381832, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:21,835 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:21,836 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:21,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:21,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:21,844 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:21,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:21,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:21,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:21,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=85, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:21,861 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:21,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 85 2023-07-21 11:16:21,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 11:16:21,866 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:21,872 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:21,873 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:21,875 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:21,882 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:21,883 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 empty. 2023-07-21 11:16:21,883 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:21,883 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-21 11:16:21,920 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:21,921 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c596215d2e7de8ecd45a7ecc52e6ec92, NAME => 'Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:21,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 11:16:21,995 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:21,996 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing c596215d2e7de8ecd45a7ecc52e6ec92, disabling compactions & flushes 2023-07-21 11:16:21,996 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:21,996 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:21,996 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. after waiting 0 ms 2023-07-21 11:16:21,996 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:21,996 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 
2023-07-21 11:16:21,996 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c596215d2e7de8ecd45a7ecc52e6ec92: 2023-07-21 11:16:22,005 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:22,007 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938182006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938182006"}]},"ts":"1689938182006"} 2023-07-21 11:16:22,017 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:22,019 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:22,020 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938182019"}]},"ts":"1689938182019"} 2023-07-21 11:16:22,021 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-21 11:16:22,026 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:22,026 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:22,026 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:22,026 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:22,026 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:16:22,026 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:22,026 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, ASSIGN}] 2023-07-21 11:16:22,031 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, ASSIGN 2023-07-21 11:16:22,033 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39805,1689938159444; forceNewPlan=false, retain=false 2023-07-21 11:16:22,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 11:16:22,183 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:16:22,185 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=c596215d2e7de8ecd45a7ecc52e6ec92, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:22,185 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938182185"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938182185"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938182185"}]},"ts":"1689938182185"} 2023-07-21 11:16:22,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=87, ppid=86, state=RUNNABLE; OpenRegionProcedure c596215d2e7de8ecd45a7ecc52e6ec92, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:22,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:22,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c596215d2e7de8ecd45a7ecc52e6ec92, NAME => 'Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:22,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:22,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,364 INFO [StoreOpener-c596215d2e7de8ecd45a7ecc52e6ec92-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,368 DEBUG [StoreOpener-c596215d2e7de8ecd45a7ecc52e6ec92-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/cf 2023-07-21 11:16:22,368 DEBUG [StoreOpener-c596215d2e7de8ecd45a7ecc52e6ec92-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/cf 2023-07-21 11:16:22,373 INFO [StoreOpener-c596215d2e7de8ecd45a7ecc52e6ec92-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; 
major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c596215d2e7de8ecd45a7ecc52e6ec92 columnFamilyName cf 2023-07-21 11:16:22,374 INFO [StoreOpener-c596215d2e7de8ecd45a7ecc52e6ec92-1] regionserver.HStore(310): Store=c596215d2e7de8ecd45a7ecc52e6ec92/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:22,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:22,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened c596215d2e7de8ecd45a7ecc52e6ec92; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9549027040, jitterRate=-0.1106775552034378}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:22,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for c596215d2e7de8ecd45a7ecc52e6ec92: 2023-07-21 11:16:22,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92., pid=87, masterSystemTime=1689938182347 2023-07-21 11:16:22,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:22,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 
2023-07-21 11:16:22,424 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=c596215d2e7de8ecd45a7ecc52e6ec92, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:22,424 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938182424"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938182424"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938182424"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938182424"}]},"ts":"1689938182424"} 2023-07-21 11:16:22,437 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=87, resume processing ppid=86 2023-07-21 11:16:22,437 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, ppid=86, state=SUCCESS; OpenRegionProcedure c596215d2e7de8ecd45a7ecc52e6ec92, server=jenkins-hbase17.apache.org,39805,1689938159444 in 246 msec 2023-07-21 11:16:22,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-21 11:16:22,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, ASSIGN in 411 msec 2023-07-21 11:16:22,461 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:22,461 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938182461"}]},"ts":"1689938182461"} 2023-07-21 11:16:22,469 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-21 11:16:22,476 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:22,479 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 629 msec 2023-07-21 11:16:22,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 11:16:22,484 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 85 completed 2023-07-21 11:16:22,485 DEBUG [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. 
Timeout = 60000ms 2023-07-21 11:16:22,485 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:22,486 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37137] ipc.CallRunner(144): callId: 413 service: ClientService methodName: Scan size: 96 connection: 136.243.18.41:47086 deadline: 1689938242486, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=39805 startCode=1689938159444. As of locationSeqNum=85. 2023-07-21 11:16:22,589 DEBUG [hconnection-0x2b8fd83-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:22,592 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33840, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:22,603 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 2023-07-21 11:16:22,604 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:22,604 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-21 11:16:22,604 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:22,612 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-21 11:16:22,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCreateAndDrop 2023-07-21 11:16:22,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 11:16:22,630 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938182629"}]},"ts":"1689938182629"} 2023-07-21 11:16:22,637 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-21 11:16:22,652 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-21 11:16:22,653 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, UNASSIGN}] 2023-07-21 11:16:22,655 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, UNASSIGN 2023-07-21 11:16:22,656 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=c596215d2e7de8ecd45a7ecc52e6ec92, regionState=CLOSING, 
regionLocation=jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:22,657 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938182656"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938182656"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938182656"}]},"ts":"1689938182656"} 2023-07-21 11:16:22,659 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; CloseRegionProcedure c596215d2e7de8ecd45a7ecc52e6ec92, server=jenkins-hbase17.apache.org,39805,1689938159444}] 2023-07-21 11:16:22,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 11:16:22,788 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:16:22,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing c596215d2e7de8ecd45a7ecc52e6ec92, disabling compactions & flushes 2023-07-21 11:16:22,813 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:22,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:22,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. after waiting 0 ms 2023-07-21 11:16:22,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 2023-07-21 11:16:22,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:22,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92. 
2023-07-21 11:16:22,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for c596215d2e7de8ecd45a7ecc52e6ec92: 2023-07-21 11:16:22,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,822 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=c596215d2e7de8ecd45a7ecc52e6ec92, regionState=CLOSED 2023-07-21 11:16:22,823 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938182822"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938182822"}]},"ts":"1689938182822"} 2023-07-21 11:16:22,825 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-21 11:16:22,825 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; CloseRegionProcedure c596215d2e7de8ecd45a7ecc52e6ec92, server=jenkins-hbase17.apache.org,39805,1689938159444 in 166 msec 2023-07-21 11:16:22,826 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-21 11:16:22,826 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=c596215d2e7de8ecd45a7ecc52e6ec92, UNASSIGN in 172 msec 2023-07-21 11:16:22,827 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938182827"}]},"ts":"1689938182827"} 2023-07-21 11:16:22,828 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-21 11:16:22,829 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateAndDrop to state=DISABLED 2023-07-21 11:16:22,831 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 218 msec 2023-07-21 11:16:22,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 11:16:22,920 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 88 completed 2023-07-21 11:16:22,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCreateAndDrop 2023-07-21 11:16:22,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,925 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=91, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-21 11:16:22,926 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting 
regions from filesystem for pid=91, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:22,931 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:22,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:16:22,939 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/cf, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/recovered.edits] 2023-07-21 11:16:22,945 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92/recovered.edits/4.seqid 2023-07-21 11:16:22,945 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCreateAndDrop/c596215d2e7de8ecd45a7ecc52e6ec92 2023-07-21 11:16:22,945 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-21 11:16:22,950 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=91, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,956 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-21 11:16:22,958 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-21 11:16:22,959 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=91, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,959 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 
2023-07-21 11:16:22,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938182959"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:22,962 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:16:22,962 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c596215d2e7de8ecd45a7ecc52e6ec92, NAME => 'Group_testCreateAndDrop,,1689938181848.c596215d2e7de8ecd45a7ecc52e6ec92.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:16:22,962 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 2023-07-21 11:16:22,962 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938182962"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:22,964 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-21 11:16:22,965 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=91, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:22,970 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 43 msec 2023-07-21 11:16:23,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 11:16:23,040 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 91 completed 2023-07-21 11:16:23,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:23,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:23,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:23,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
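For orientation, the client-side sequence that produces the CREATE/DISABLE/DELETE operations logged above (procIds 85, 88 and 91 for Group_testCreateAndDrop) corresponds roughly to the following sketch against the HBase 2.x Admin API. The table name and the single column family 'cf' are taken from the log; the connection setup, class name and standalone main are illustrative only and are not the actual TestRSGroupsBasics code (the test obtains its Connection from the mini-cluster utility).

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateAndDropSketch {
      public static void main(String[] args) throws Exception {
        // Plain configuration used only to keep the sketch self-contained;
        // in the test the Connection comes from the running mini-cluster.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testCreateAndDrop");
          // CREATE (pid=85): one region, one column family 'cf'.
          admin.createTable(TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
              .build());
          // DISABLE (pid=88) must complete before the table can be dropped.
          admin.disableTable(tn);
          // DELETE (pid=91): region dirs are archived and the rows removed from hbase:meta.
          admin.deleteTable(tn);
        }
      }
    }

The synchronous Admin calls block until the corresponding master procedure finishes, which is what the repeated "Checking to see if procedure is done pid=..." entries above reflect.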
2023-07-21 11:16:23,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:23,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:23,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:23,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:23,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:23,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:23,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:23,056 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:23,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:23,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:23,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:23,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:23,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:23,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:23,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:23,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:23,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:23,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 458 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939383067, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:23,067 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:23,072 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:23,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:23,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:23,074 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:23,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:23,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:23,100 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=520 (was 522), OpenFileDescriptor=814 (was 819), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=839 (was 839), ProcessCount=186 (was 186), AvailableMemoryMB=3485 (was 3658) 2023-07-21 11:16:23,100 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-21 11:16:23,119 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=520, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=839, ProcessCount=186, AvailableMemoryMB=3484 2023-07-21 11:16:23,119 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-21 11:16:23,119 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testCloneSnapshot 2023-07-21 11:16:23,124 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:23,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:23,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:23,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:23,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:23,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:23,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:23,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:23,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:23,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:23,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:23,136 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:23,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:23,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:23,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:23,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:23,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:23,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:23,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:23,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:23,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:23,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 486 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939383148, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:23,148 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:23,150 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:23,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:23,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:23,152 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:23,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:23,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:23,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:23,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=92, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:23,162 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=92, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:23,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 92 2023-07-21 11:16:23,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=92 2023-07-21 11:16:23,165 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:23,166 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:23,167 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:23,174 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=92, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:23,176 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 2023-07-21 11:16:23,180 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 empty. 2023-07-21 11:16:23,181 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 2023-07-21 11:16:23,182 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-21 11:16:23,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=92 2023-07-21 11:16:23,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=92 2023-07-21 11:16:23,625 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:23,632 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => b97d53680440df3772a48699002f8496, NAME => 'Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:23,645 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:23,645 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing b97d53680440df3772a48699002f8496, disabling compactions & flushes 2023-07-21 11:16:23,646 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:23,646 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:23,646 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. after waiting 0 ms 2023-07-21 11:16:23,646 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 
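The group cleanup that runs between the tests above (after testCreateAndDrop and again before testCloneSnapshot) goes through the rsgroup admin endpoint: list groups, move empty table/server sets back to 'default', drop and re-create the 'master' group, then try to move the master's own address into it, which fails with the logged ConstraintException because the master is not a region server; the teardown deliberately tolerates that failure ("Got this on setup, FYI"). A rough client-side equivalent is sketched below; moveServers is confirmed by the stack trace above, while the other method names are inferred from the logged RPC names (AddRSGroup, RemoveRSGroup) and should be checked against the branch source.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // RemoveRSGroup / AddRSGroup as logged: reset the 'master' group between tests.
          groups.removeRSGroup("master");
          groups.addRSGroup("master");
          // MoveServers: attempt to pin the active master's address into the 'master' group.
          // The master is not a region server, so this is expected to throw
          // ConstraintException ("Server ... is either offline or it does not exist"),
          // which the test teardown logs and ignores.
          groups.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:41077")),
              "master");
        }
      }
    }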
2023-07-21 11:16:23,646 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:23,646 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for b97d53680440df3772a48699002f8496: 2023-07-21 11:16:23,648 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=92, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:23,649 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938183649"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938183649"}]},"ts":"1689938183649"} 2023-07-21 11:16:23,651 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:23,655 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=92, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:23,656 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938183656"}]},"ts":"1689938183656"} 2023-07-21 11:16:23,665 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-21 11:16:23,667 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:23,667 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:23,667 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:23,667 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:23,667 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:16:23,667 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:23,668 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, ASSIGN}] 2023-07-21 11:16:23,673 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=93, ppid=92, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, ASSIGN 2023-07-21 11:16:23,675 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=93, ppid=92, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40467,1689938170241; forceNewPlan=false, retain=false 2023-07-21 11:16:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=92 2023-07-21 11:16:23,826 INFO 
[jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:16:23,828 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=b97d53680440df3772a48699002f8496, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:23,828 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938183828"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938183828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938183828"}]},"ts":"1689938183828"} 2023-07-21 11:16:23,838 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE; OpenRegionProcedure b97d53680440df3772a48699002f8496, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:23,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:23,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b97d53680440df3772a48699002f8496, NAME => 'Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:23,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot b97d53680440df3772a48699002f8496 2023-07-21 11:16:23,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:23,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b97d53680440df3772a48699002f8496 2023-07-21 11:16:23,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b97d53680440df3772a48699002f8496 2023-07-21 11:16:23,997 INFO [StoreOpener-b97d53680440df3772a48699002f8496-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region b97d53680440df3772a48699002f8496 2023-07-21 11:16:23,998 DEBUG [StoreOpener-b97d53680440df3772a48699002f8496-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/test 2023-07-21 11:16:23,998 DEBUG [StoreOpener-b97d53680440df3772a48699002f8496-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/test 2023-07-21 11:16:23,999 INFO [StoreOpener-b97d53680440df3772a48699002f8496-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b97d53680440df3772a48699002f8496 columnFamilyName test 2023-07-21 11:16:23,999 INFO [StoreOpener-b97d53680440df3772a48699002f8496-1] regionserver.HStore(310): Store=b97d53680440df3772a48699002f8496/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:24,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 2023-07-21 11:16:24,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 2023-07-21 11:16:24,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b97d53680440df3772a48699002f8496 2023-07-21 11:16:24,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:24,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b97d53680440df3772a48699002f8496; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9560783200, jitterRate=-0.10958267748355865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:24,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b97d53680440df3772a48699002f8496: 2023-07-21 11:16:24,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496., pid=94, masterSystemTime=1689938183991 2023-07-21 11:16:24,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:24,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 
2023-07-21 11:16:24,008 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=b97d53680440df3772a48699002f8496, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,008 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938184008"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938184008"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938184008"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938184008"}]},"ts":"1689938184008"} 2023-07-21 11:16:24,012 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-21 11:16:24,012 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; OpenRegionProcedure b97d53680440df3772a48699002f8496, server=jenkins-hbase17.apache.org,40467,1689938170241 in 171 msec 2023-07-21 11:16:24,014 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-21 11:16:24,014 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, ASSIGN in 344 msec 2023-07-21 11:16:24,015 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=92, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:24,015 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938184015"}]},"ts":"1689938184015"} 2023-07-21 11:16:24,016 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-21 11:16:24,018 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=92, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:24,019 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 859 msec 2023-07-21 11:16:24,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=92 2023-07-21 11:16:24,272 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 92 completed 2023-07-21 11:16:24,272 DEBUG [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-21 11:16:24,272 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:24,281 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
2023-07-21 11:16:24,281 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:24,281 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-21 11:16:24,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1583): Client=jenkins//136.243.18.41 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-21 11:16:24,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1689938184318 (current time:1689938184318). 2023-07-21 11:16:24,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-21 11:16:24,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] zookeeper.ReadOnlyZKClient(139): Connect 0x3b192830 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:24,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43567fe1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:24,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:24,362 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33852, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:24,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b192830 to 127.0.0.1:61077 2023-07-21 11:16:24,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:24,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
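The entries above trace the client side of this test up to the snapshot request: CreateTableProcedure pid=92 for Group_testCloneSnapshot finishes, the listener waits until the table's single region is assigned, and the master then receives a FLUSH snapshot request for Group_testCloneSnapshot_snap. As a minimal client-side sketch only, using table, column-family and snapshot names taken from the log (the actual test code is not reproduced in this excerpt), the Admin calls that would produce this sequence look roughly like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CloneSnapshotClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testCloneSnapshot");
      // Create the table with the single 'test' column family shown in the region descriptor above.
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("test"))
          .build());
      // Take a FLUSH-type snapshot. Admin.snapshot() blocks until the master reports the snapshot
      // done, which surfaces in the log as the client's "Sleeping ... while waiting for snapshot
      // completion" / "Getting current status of snapshot from master" polling.
      admin.snapshot("Group_testCloneSnapshot_snap", table);
      // The clone-snapshot step itself (admin.cloneSnapshot(...)) would follow, outside this excerpt.
    }
  }
}
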
2023-07-21 11:16:24,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-21 11:16:24,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 11:16:24,398 DEBUG [PEWorker-1] locking.LockProcedure(309): LOCKED pid=95, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 11:16:24,401 INFO [PEWorker-1] procedure2.TimeoutExecutorThread(81): ADDED pid=95, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1689938784400 2023-07-21 11:16:24,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-21 11:16:24,401 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-21 11:16:24,404 DEBUG [PEWorker-5] locking.LockProcedure(242): UNLOCKED pid=95, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 11:16:24,410 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 18 msec 2023-07-21 11:16:24,410 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-21 11:16:24,410 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 2023-07-21 11:16:24,410 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 11:16:24,412 DEBUG [PEWorker-4] locking.LockProcedure(309): LOCKED pid=96, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 11:16:24,417 INFO [PEWorker-4] procedure2.TimeoutExecutorThread(81): ADDED pid=96, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1689938784417 2023-07-21 11:16:24,512 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2434): Getting current status of snapshot from master... 
2023-07-21 11:16:24,514 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-21 11:16:24,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-21 11:16:24,520 INFO [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-21 11:16:24,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-21 11:16:24,522 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 11:16:24,522 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-21 11:16:24,522 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 2023-07-21 11:16:24,522 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 2023-07-21 11:16:24,523 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,523 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,524 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,524 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,524 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,524 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,524 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,524 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: 
/hbase/online-snapshot/acquired 2023-07-21 11:16:24,524 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,524 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,524 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,524 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,525 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,525 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,525 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,525 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,526 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-21 11:16:24,526 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 11:16:24,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 11:16:24,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,526 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 11:16:24,526 DEBUG 
[zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 11:16:24,528 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,529 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 11:16:24,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 11:16:24,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,530 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 11:16:24,530 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,530 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,530 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,531 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,532 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 11:16:24,532 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,532 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 11:16:24,537 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 11:16:24,540 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 11:16:24,540 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 11:16:24,544 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 11:16:24,544 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 11:16:24,545 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 11:16:24,544 DEBUG [member: 
'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 11:16:24,547 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 11:16:24,547 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,37137,1689938164928' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,40467,1689938170241' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 11:16:24,549 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 11:16:24,549 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 11:16:24,549 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,39805,1689938159444' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 11:16:24,548 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 11:16:24,550 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 11:16:24,550 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 11:16:24,550 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' 
subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,40783,1689938159262' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 11:16:24,550 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,551 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-21 11:16:24,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-21 11:16:24,551 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,551 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,551 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 11:16:24,551 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 11:16:24,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-21 11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 
11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,552 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 11:16:24,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-21 11:16:24,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 11:16:24,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-21 11:16:24,555 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,40467,1689938170241' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-21 11:16:24,555 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 
2023-07-21 11:16:24,555 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@450f35b7[Count = 0] remaining members to acquire global barrier 2023-07-21 11:16:24,555 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,556 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,557 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,557 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,557 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
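The entries from the snapshot submission down to this point are the two ZooKeeper barrier phases of HBase's online-snapshot procedure: the coordinator announces the procedure under /hbase/online-snapshot/acquired, each region server joins by creating a child znode named after itself, and once all expected members have joined the coordinator creates the matching node under /hbase/online-snapshot/reached, releasing the members into their 'in-barrier' work. The following is only a rough, simplified sketch of the coordinator side of that znode protocol using the plain ZooKeeper client; HBase's real implementation lives in ZKProcedureCoordinator/ZKProcedureMemberRpcs and uses watchers, latches and the abort znodes seen later in this log rather than the polling shown here.

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Simplified coordinator-side barrier sketch; not the actual HBase code.
public class BarrierCoordinatorSketch {
  static final String BASE = "/hbase/online-snapshot";

  static void runBarrier(ZooKeeper zk, String proc, List<String> members) throws Exception {
    String acquired = BASE + "/acquired/" + proc;
    String reached = BASE + "/reached/" + proc;
    // 1. Announce the procedure; members watching /acquired pick it up.
    zk.create(acquired, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // 2. Wait for every member to create acquired/<proc>/<member>.
    for (String m : members) {
      while (zk.exists(acquired + "/" + m, false) == null) {
        Thread.sleep(50); // the real code registers watchers and counts down a latch instead
      }
    }
    // 3. Release the barrier: members watching reached/<proc> start their in-barrier work.
    zk.create(reached, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // 4. Wait for every member to report completion under reached/<proc>/<member>, then clean up.
    for (String m : members) {
      while (zk.exists(reached + "/" + m, false) == null) {
        Thread.sleep(50);
      }
    }
  }
}
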
2023-07-21 11:16:24,556 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,557 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 11:16:24,556 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-21 11:16:24,557 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,39805,1689938159444' in zk 2023-07-21 11:16:24,557 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,557 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-21 11:16:24,557 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 11:16:24,557 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,37137,1689938164928' in zk 2023-07-21 11:16:24,557 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-21 11:16:24,558 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 11:16:24,558 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 11:16:24,558 DEBUG [member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-21 11:16:24,558 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 11:16:24,560 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 11:16:24,560 DEBUG [member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-21 11:16:24,561 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
2023-07-21 11:16:24,562 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 11:16:24,562 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,40783,1689938159262' in zk 2023-07-21 11:16:24,563 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 11:16:24,563 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 11:16:24,563 DEBUG [member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-21 11:16:24,565 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-21 11:16:24,565 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-21 11:16:24,566 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:24,566 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. started... 2023-07-21 11:16:24,568 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for b97d53680440df3772a48699002f8496: 2023-07-21 11:16:24,569 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.' region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-21 11:16:24,575 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-21 11:16:24,579 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-21 11:16:24,596 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. completed. 2023-07-21 11:16:24,596 DEBUG [rs(jenkins-hbase17.apache.org,40467,1689938170241)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 
2023-07-21 11:16:24,596 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-21 11:16:24,597 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 2023-07-21 11:16:24,597 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,597 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 11:16:24,597 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,40467,1689938170241' in zk 2023-07-21 11:16:24,598 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,598 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,598 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-21 11:16:24,598 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-21 11:16:24,598 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 11:16:24,598 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 11:16:24,598 DEBUG [member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 
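On the member side, the entries above show the region server actually hosting the region (jenkins-hbase17.apache.org,40467) doing its in-barrier work: FlushSnapshotSubprocedure flushes the single region, SnapshotManifest stores the region-info and the (empty) hfile reference list, and the member then marks the procedure complete under the reached znode. The member half of the same barrier pattern, again only as a simplified sketch with a hypothetical helper (doRegionSnapshotWork stands in for the real flush-and-manifest work), looks roughly like:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Member (region server) side of the barrier sketched above; the real logic is in
// ZKProcedureMemberRpcs plus FlushSnapshotSubprocedure for the actual snapshot work.
public class BarrierMemberSketch {
  static final String BASE = "/hbase/online-snapshot";

  static void join(ZooKeeper zk, String proc, String memberName, Runnable doRegionSnapshotWork)
      throws Exception {
    // 'acquire': announce participation so the coordinator can count this member as joined.
    zk.create(BASE + "/acquired/" + proc + "/" + memberName, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // Wait for the coordinator to create the 'reached' barrier node (the real code uses a watcher).
    while (zk.exists(BASE + "/reached/" + proc, false) == null) {
      Thread.sleep(50);
    }
    // In-barrier phase: flush the region and write the snapshot manifest entries.
    doRegionSnapshotWork.run();
    // Report completion; the coordinator tears the znodes down once every member has done this.
    zk.create(BASE + "/reached/" + proc + "/" + memberName, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  }
}
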
2023-07-21 11:16:24,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-21 11:16:24,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-21 11:16:24,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 11:16:24,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-21 11:16:24,604 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 11:16:24,604 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase17.apache.org,40467,1689938170241': 2023-07-21 11:16:24,606 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,40467,1689938170241' released barrier for procedure'Group_testCloneSnapshot_snap', counting down latch. Waiting for 0 more 2023-07-21 11:16:24,606 INFO [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-21 11:16:24,607 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-07-21 11:16:24,607 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-21 11:16:24,607 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-21 11:16:24,607 INFO [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snapincluding nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-21 11:16:24,609 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,609 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,610 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,610 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,610 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,613 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,613 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,614 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,615 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-21 11:16:24,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-21 11:16:24,616 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-21 11:16:24,610 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,610 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,616 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,616 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,617 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,617 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,617 DEBUG 
[(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,617 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 11:16:24,617 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,618 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-21 11:16:24,618 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,618 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 11:16:24,619 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,619 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,619 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,620 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,621 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,621 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,621 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,625 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,632 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-21 11:16:24,633 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 11:16:24,633 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,634 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 
11:16:24,634 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,634 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,635 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,637 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,644 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,644 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,644 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,644 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,644 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 11:16:24,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,644 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,645 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 11:16:24,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:24,645 DEBUG [(jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,645 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,645 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-21 11:16:24,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,645 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 11:16:24,645 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-21 11:16:24,645 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] 
zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,647 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,649 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-21 11:16:24,722 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-21 11:16:24,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-21 11:16:24,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-21 11:16:24,724 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2428): (#3) Sleeping: 300ms while waiting for snapshot completion. 2023-07-21 11:16:25,024 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-21 11:16:25,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-21 11:16:25,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-21 11:16:25,026 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2428): (#4) Sleeping: 500ms while waiting for snapshot completion. 
2023-07-21 11:16:25,094 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-21 11:16:25,125 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 2023-07-21 11:16:25,125 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-21 11:16:25,125 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-21 11:16:25,125 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1689938184402Consolidate snapshot: Group_testCloneSnapshot_snap at 1689938184645 (+243 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1689938184646 (+1 ms)Writing data manifest for Group_testCloneSnapshot_snap at 1689938184658 (+12 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1689938185083 (+425 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1689938185125 (+42 ms) 2023-07-21 11:16:25,127 DEBUG [PEWorker-2] locking.LockProcedure(242): UNLOCKED pid=96, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 11:16:25,129 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 721 msec 2023-07-21 11:16:25,527 DEBUG [Listener at localhost.localdomain/33557] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-21 11:16:25,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-21 11:16:25,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 
2023-07-21 11:16:25,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-21 11:16:25,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:25,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:25,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:25,544 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 2023-07-21 11:16:25,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689938184318 type: FLUSH version: 2 ttl: 0 ) 2023-07-21 11:16:25,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-21 11:16:25,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 11:16:25,581 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:25,588 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689938184318 type: FLUSH version: 2 ttl: 0 2023-07-21 11:16:25,589 DEBUG [PEWorker-3] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-21 11:16:25,590 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(239): region to add: b97d53680440df3772a48699002f8496 2023-07-21 11:16:25,590 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(585): clone region=b97d53680440df3772a48699002f8496 as 9649030fd5309f6f671000b56f025884 in snapshot Group_testCloneSnapshot_snap 2023-07-21 11:16:25,592 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9649030fd5309f6f671000b56f025884, NAME => 'Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:25,610 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated 
Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:25,611 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing 9649030fd5309f6f671000b56f025884, disabling compactions & flushes 2023-07-21 11:16:25,611 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:25,611 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:25,611 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. after waiting 0 ms 2023-07-21 11:16:25,611 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:25,611 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:25,611 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 9649030fd5309f6f671000b56f025884: 2023-07-21 11:16:25,611 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689938184318 type: FLUSH version: 2 ttl: 0 2023-07-21 11:16:25,612 INFO [PEWorker-3] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-21 11:16:25,619 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689938185619"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938185619"}]},"ts":"1689938185619"} 2023-07-21 11:16:25,621 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 11:16:25,622 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938185622"}]},"ts":"1689938185622"} 2023-07-21 11:16:25,624 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-21 11:16:25,629 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:25,629 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:25,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:25,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:25,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 11:16:25,630 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:25,630 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, ASSIGN}] 2023-07-21 11:16:25,634 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, ASSIGN 2023-07-21 11:16:25,635 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40783,1689938159262; forceNewPlan=false, retain=false 2023-07-21 11:16:25,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 11:16:25,785 INFO [jenkins-hbase17:41077] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 11:16:25,787 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9649030fd5309f6f671000b56f025884, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:25,787 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689938185787"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938185787"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938185787"}]},"ts":"1689938185787"} 2023-07-21 11:16:25,789 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 9649030fd5309f6f671000b56f025884, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:25,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 11:16:25,929 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:25,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:25,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9649030fd5309f6f671000b56f025884, NAME => 'Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:25,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:25,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:25,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:25,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:25,990 INFO [StoreOpener-9649030fd5309f6f671000b56f025884-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:25,993 DEBUG [StoreOpener-9649030fd5309f6f671000b56f025884-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/test 2023-07-21 11:16:25,994 DEBUG [StoreOpener-9649030fd5309f6f671000b56f025884-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/test 2023-07-21 11:16:25,994 INFO [StoreOpener-9649030fd5309f6f671000b56f025884-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9649030fd5309f6f671000b56f025884 columnFamilyName test 2023-07-21 11:16:25,995 INFO [StoreOpener-9649030fd5309f6f671000b56f025884-1] regionserver.HStore(310): Store=9649030fd5309f6f671000b56f025884/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:25,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884 2023-07-21 11:16:25,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884 2023-07-21 11:16:26,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:26,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:26,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 9649030fd5309f6f671000b56f025884; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11405669280, jitterRate=0.06223572790622711}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:26,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 9649030fd5309f6f671000b56f025884: 2023-07-21 11:16:26,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884., pid=99, masterSystemTime=1689938185943 2023-07-21 11:16:26,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 
2023-07-21 11:16:26,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:26,027 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9649030fd5309f6f671000b56f025884, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:26,027 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689938186027"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938186027"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938186027"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938186027"}]},"ts":"1689938186027"} 2023-07-21 11:16:26,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 11:16:26,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 9649030fd5309f6f671000b56f025884, server=jenkins-hbase17.apache.org,40783,1689938159262 in 244 msec 2023-07-21 11:16:26,053 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-21 11:16:26,053 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, ASSIGN in 411 msec 2023-07-21 11:16:26,054 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938186054"}]},"ts":"1689938186054"} 2023-07-21 11:16:26,058 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-21 11:16:26,065 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689938184318 type: FLUSH version: 2 ttl: 0 ) in 512 msec 2023-07-21 11:16:26,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 11:16:26,172 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 97 completed 2023-07-21 11:16:26,174 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-21 11:16:26,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCloneSnapshot 2023-07-21 11:16:26,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,184 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938186184"}]},"ts":"1689938186184"} 2023-07-21 11:16:26,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 11:16:26,186 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-21 11:16:26,188 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-21 11:16:26,192 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, UNASSIGN}] 2023-07-21 11:16:26,197 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, UNASSIGN 2023-07-21 11:16:26,199 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=b97d53680440df3772a48699002f8496, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:26,199 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938186199"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938186199"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938186199"}]},"ts":"1689938186199"} 2023-07-21 11:16:26,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; CloseRegionProcedure b97d53680440df3772a48699002f8496, server=jenkins-hbase17.apache.org,40467,1689938170241}] 2023-07-21 11:16:26,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 11:16:26,342 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCloneSnapshot_clone' 2023-07-21 11:16:26,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b97d53680440df3772a48699002f8496 2023-07-21 11:16:26,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b97d53680440df3772a48699002f8496, disabling compactions & flushes 2023-07-21 11:16:26,362 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:26,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:26,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 
after waiting 0 ms 2023-07-21 11:16:26,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:26,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-21 11:16:26,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496. 2023-07-21 11:16:26,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b97d53680440df3772a48699002f8496: 2023-07-21 11:16:26,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b97d53680440df3772a48699002f8496 2023-07-21 11:16:26,444 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=b97d53680440df3772a48699002f8496, regionState=CLOSED 2023-07-21 11:16:26,445 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689938186444"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938186444"}]},"ts":"1689938186444"} 2023-07-21 11:16:26,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-21 11:16:26,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; CloseRegionProcedure b97d53680440df3772a48699002f8496, server=jenkins-hbase17.apache.org,40467,1689938170241 in 253 msec 2023-07-21 11:16:26,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-21 11:16:26,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=b97d53680440df3772a48699002f8496, UNASSIGN in 273 msec 2023-07-21 11:16:26,471 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938186471"}]},"ts":"1689938186471"} 2023-07-21 11:16:26,475 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-21 11:16:26,477 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-21 11:16:26,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 303 msec 2023-07-21 11:16:26,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 11:16:26,498 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 100 completed 2023-07-21 11:16:26,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete 
Group_testCloneSnapshot 2023-07-21 11:16:26,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,503 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-21 11:16:26,504 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=103, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:26,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:26,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:26,508 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 2023-07-21 11:16:26,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 11:16:26,509 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/recovered.edits, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/test] 2023-07-21 11:16:26,516 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/recovered.edits/5.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496/recovered.edits/5.seqid 2023-07-21 11:16:26,518 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot/b97d53680440df3772a48699002f8496 2023-07-21 11:16:26,518 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-21 11:16:26,520 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=103, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,523 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-21 
11:16:26,526 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 2023-07-21 11:16:26,528 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=103, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,528 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-21 11:16:26,529 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938186529"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:26,536 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:16:26,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b97d53680440df3772a48699002f8496, NAME => 'Group_testCloneSnapshot,,1689938183159.b97d53680440df3772a48699002f8496.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:16:26,536 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-21 11:16:26,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938186536"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:26,553 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-21 11:16:26,565 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=103, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:26,568 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 66 msec 2023-07-21 11:16:26,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 11:16:26,609 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 103 completed 2023-07-21 11:16:26,609 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-21 11:16:26,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCloneSnapshot_clone 2023-07-21 11:16:26,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 11:16:26,614 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938186614"}]},"ts":"1689938186614"} 2023-07-21 11:16:26,616 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-21 11:16:26,617 INFO [PEWorker-3] 
procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-21 11:16:26,618 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, UNASSIGN}] 2023-07-21 11:16:26,620 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, UNASSIGN 2023-07-21 11:16:26,621 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=9649030fd5309f6f671000b56f025884, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:26,621 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689938186621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938186621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938186621"}]},"ts":"1689938186621"} 2023-07-21 11:16:26,623 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 9649030fd5309f6f671000b56f025884, server=jenkins-hbase17.apache.org,40783,1689938159262}] 2023-07-21 11:16:26,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 11:16:26,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:26,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 9649030fd5309f6f671000b56f025884, disabling compactions & flushes 2023-07-21 11:16:26,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:26,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:26,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. after waiting 0 ms 2023-07-21 11:16:26,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 2023-07-21 11:16:26,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:26,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884. 
2023-07-21 11:16:26,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 9649030fd5309f6f671000b56f025884: 2023-07-21 11:16:26,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 9649030fd5309f6f671000b56f025884 2023-07-21 11:16:26,785 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=9649030fd5309f6f671000b56f025884, regionState=CLOSED 2023-07-21 11:16:26,785 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689938186785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938186785"}]},"ts":"1689938186785"} 2023-07-21 11:16:26,788 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-21 11:16:26,788 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 9649030fd5309f6f671000b56f025884, server=jenkins-hbase17.apache.org,40783,1689938159262 in 163 msec 2023-07-21 11:16:26,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-21 11:16:26,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=9649030fd5309f6f671000b56f025884, UNASSIGN in 170 msec 2023-07-21 11:16:26,790 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938186790"}]},"ts":"1689938186790"} 2023-07-21 11:16:26,792 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-21 11:16:26,793 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-21 11:16:26,794 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 184 msec 2023-07-21 11:16:26,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 11:16:26,916 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 104 completed 2023-07-21 11:16:26,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCloneSnapshot_clone 2023-07-21 11:16:26,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,919 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-21 
11:16:26,920 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:26,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:26,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:26,924 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884 2023-07-21 11:16:26,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 11:16:26,925 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/recovered.edits, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/test] 2023-07-21 11:16:26,929 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/recovered.edits/4.seqid to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884/recovered.edits/4.seqid 2023-07-21 11:16:26,931 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/default/Group_testCloneSnapshot_clone/9649030fd5309f6f671000b56f025884 2023-07-21 11:16:26,931 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-21 11:16:26,934 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,936 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-21 11:16:26,937 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-21 11:16:26,939 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,939 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
2023-07-21 11:16:26,939 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938186939"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:26,941 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:16:26,941 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9649030fd5309f6f671000b56f025884, NAME => 'Group_testCloneSnapshot_clone,,1689938183159.9649030fd5309f6f671000b56f025884.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:16:26,941 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-21 11:16:26,941 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938186941"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:26,945 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-21 11:16:26,947 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:26,947 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 30 msec 2023-07-21 11:16:27,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 11:16:27,025 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 107 completed 2023-07-21 11:16:27,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:27,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
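[Editor's sketch, not part of the captured log] The sequence above — DisableTableProcedure pid=104 followed by DeleteTableProcedure pid=107 for Group_testCloneSnapshot_clone — is driven from the test client through the ordinary Admin API. A minimal client-side sketch, assuming a standard HBase 2.x Connection and the table name exactly as logged (the class and method names below are illustrative, not taken from the test source):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropCloneSketch {
  public static void main(String[] args) throws Exception {
    TableName clone = TableName.valueOf("Group_testCloneSnapshot_clone");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.tableExists(clone)) {
        admin.disableTable(clone);  // master runs a DisableTableProcedure (pid=104 above)
        admin.deleteTable(clone);   // master runs a DeleteTableProcedure (pid=107 above),
                                    // which archives the region dirs via HFileArchiver
                                    // and removes the table's rows from hbase:meta
      }
    }
  }
}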
2023-07-21 11:16:27,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:27,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:27,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:27,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:27,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:27,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:27,041 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:27,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:27,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:27,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:27,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:27,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:27,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:27,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939387076, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:27,076 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:27,078 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:27,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,079 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:27,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:27,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:27,100 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=524 (was 520) Potentially hanging thread: hconnection-0x4b141945-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1760670747_17 at /127.0.0.1:39998 [Waiting for operation #14] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,40467,1689938170241' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,39805,1689938159444' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1662154281_17 at /127.0.0.1:50728 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4543071c-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: (jenkins-hbase17.apache.org,41077,1689938157103)-proc-coordinator-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,40783,1689938159262' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2b8fd83-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b141945-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1662154281_17 at /127.0.0.1:41314 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,37137,1689938164928' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=852 (was 839) - SystemLoadAverage LEAK? -, ProcessCount=186 (was 186), AvailableMemoryMB=3194 (was 3484) 2023-07-21 11:16:27,100 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-21 11:16:27,117 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=524, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=852, ProcessCount=186, AvailableMemoryMB=3193 2023-07-21 11:16:27,117 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-21 11:16:27,117 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:27,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:27,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
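[Editor's sketch, not part of the captured log] The teardown/setup churn logged around each test method (RemoveRSGroup and AddRSGroup for 'master', then a MoveServers call rejected with a ConstraintException) comes from TestRSGroupsBase attempting to move the master's address, jenkins-hbase17.apache.org:41077, into the 'master' rsgroup; that address is the HMaster RPC endpoint, not an online region server, so RSGroupAdminServer.moveServers refuses it and the test logs the exception as "Got this on setup, FYI". A rough sketch of the same calls through the branch-2.4 RSGroupAdminClient, with the Connection and addresses assumed rather than taken from the test source:

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterSketch {
  // 'conn' is assumed to be an open Connection to the mini cluster.
  static void recreateMasterGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.removeRSGroup("master");  // RemoveRSGroup in the log
    rsGroupAdmin.addRSGroup("master");     // AddRSGroup in the log
    // This is the call the master rejects: 41077 is the HMaster port, not an
    // online region server, so a ConstraintException is thrown back to the client.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:41077")),
        "master");
  }
}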
2023-07-21 11:16:27,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:27,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:27,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:27,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:27,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:27,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:27,132 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:27,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:27,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:27,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:27,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:27,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:27,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:27,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 604 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939387143, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:27,143 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:27,145 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:27,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,146 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:27,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:27,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:27,147 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:27,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:27,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:27,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup appInfo 2023-07-21 11:16:27,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:27,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:27,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:27,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:27,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37137] to rsgroup appInfo 2023-07-21 11:16:27,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:27,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:27,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:27,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 11:16:27,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37137,1689938164928] are moved back to default 2023-07-21 11:16:27,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-21 11:16:27,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:27,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:27,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:27,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=appInfo 2023-07-21 11:16:27,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:27,176 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-21 11:16:27,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.ServerManager(636): Server jenkins-hbase17.apache.org,37137,1689938164928 added to draining server list. 2023-07-21 11:16:27,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:27,179 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase17.apache.org,37137,1689938164928 is already in the draining server list.Ignoring request to add it again. 2023-07-21 11:16:27,179 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase17.apache.org,37137,1689938164928] 2023-07-21 11:16:27,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-21 11:16:27,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:27,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-21 11:16:27,190 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:27,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 13 msec 2023-07-21 11:16:27,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-21 11:16:27,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:27,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:27,293 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=109, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:27,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 
procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 109 2023-07-21 11:16:27,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 11:16:27,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=109, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=28 msec 2023-07-21 11:16:27,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 11:16:27,400 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 109 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-21 11:16:27,400 DEBUG [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 11:16:27,406 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:27,406 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-21 11:16:27,406 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase17.apache.org,37137,1689938164928] 2023-07-21 11:16:27,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:27,410 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:27,412 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=110, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:27,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 110 2023-07-21 11:16:27,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 11:16:27,414 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:27,415 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:27,415 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:27,415 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:27,417 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=110, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:27,418 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,419 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 empty. 
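
[editor's note] The two create attempts above (pid=109 rolled back with "No online servers in the rsgroup appInfo", then pid=110 accepted once the lone appInfo server leaves the draining list) map onto a small client-side sequence. The following is a minimal sketch only, assuming an open Connection to the mini-cluster; the namespace, rsgroup, table, and family names are copied from the log, while the wrapper class name is made up for the sketch and the draining/undraining itself (done through ZooKeeper by the test harness) is only described in comments.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseIOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateInRSGroupSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // pid=108: {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'}
          admin.createNamespace(NamespaceDescriptor.create("Group_ns")
              .addConfiguration("hbase.rsgroup.name", "appInfo")
              .build());

          TableName tn = TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers");
          TableDescriptor td = TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();

          try {
            // pid=109: rejected by the RSGroupAdminEndpoint coprocessor while every
            // server in 'appInfo' is on the draining list, so the procedure rolls back.
            admin.createTable(td);
          } catch (HBaseIOException expected) {
            // "No online servers in the rsgroup appInfo which table ... belongs to"
          }

          // pid=110: after the server is removed from /hbase/draining, the same call succeeds.
          admin.createTable(td);
        }
      }
    }
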
2023-07-21 11:16:27,419 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,419 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-21 11:16:27,431 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:27,432 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3cc7f40d3f01f6c91efc8b213b12db78, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:27,442 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:27,442 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing 3cc7f40d3f01f6c91efc8b213b12db78, disabling compactions & flushes 2023-07-21 11:16:27,442 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:27,442 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:27,442 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. after waiting 0 ms 2023-07-21 11:16:27,442 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:27,442 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 
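
[editor's note] The schema string printed for the new region ({NAME => 'f', BLOOMFILTER => 'ROW', VERSIONS => '1', ...}) is a single-family descriptor with default values. Spelled out with the 2.x builder API it would look roughly like the sketch below; every value is taken from the printed descriptor rather than from the test source, and the class name is invented for illustration.

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SchemaSketch {
      static TableDescriptor descriptor() {
        ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setBloomFilterType(BloomType.ROW)             // BLOOMFILTER => 'ROW'
            .setInMemory(false)                            // IN_MEMORY => 'false'
            .setMaxVersions(1)                             // VERSIONS => '1'
            .setKeepDeletedCells(KeepDeletedCells.FALSE)   // KEEP_DELETED_CELLS => 'FALSE'
            .setDataBlockEncoding(DataBlockEncoding.NONE)  // DATA_BLOCK_ENCODING => 'NONE'
            .setCompressionType(Compression.Algorithm.NONE)
            .setTimeToLive(HConstants.FOREVER)             // TTL => 'FOREVER'
            .setMinVersions(0)
            .setBlockCacheEnabled(true)                    // BLOCKCACHE => 'true'
            .setBlocksize(65536)                           // BLOCKSIZE => '65536'
            .setScope(HConstants.REPLICATION_SCOPE_LOCAL)  // REPLICATION_SCOPE => '0'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers"))
            .setColumnFamily(f)
            .build();
      }
    }
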
2023-07-21 11:16:27,442 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for 3cc7f40d3f01f6c91efc8b213b12db78: 2023-07-21 11:16:27,445 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=110, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:27,445 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938187445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938187445"}]},"ts":"1689938187445"} 2023-07-21 11:16:27,447 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:27,447 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=110, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:27,448 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938187447"}]},"ts":"1689938187447"} 2023-07-21 11:16:27,449 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-21 11:16:27,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, ASSIGN}] 2023-07-21 11:16:27,453 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, ASSIGN 2023-07-21 11:16:27,454 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37137,1689938164928; forceNewPlan=false, retain=false 2023-07-21 11:16:27,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 11:16:27,605 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=3cc7f40d3f01f6c91efc8b213b12db78, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:27,605 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938187605"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938187605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938187605"}]},"ts":"1689938187605"} 2023-07-21 11:16:27,607 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; OpenRegionProcedure 3cc7f40d3f01f6c91efc8b213b12db78, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:27,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 11:16:27,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:27,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3cc7f40d3f01f6c91efc8b213b12db78, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:27,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:27,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,772 INFO [StoreOpener-3cc7f40d3f01f6c91efc8b213b12db78-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,774 DEBUG [StoreOpener-3cc7f40d3f01f6c91efc8b213b12db78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/f 2023-07-21 11:16:27,775 DEBUG [StoreOpener-3cc7f40d3f01f6c91efc8b213b12db78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/f 2023-07-21 11:16:27,775 INFO [StoreOpener-3cc7f40d3f01f6c91efc8b213b12db78-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
3cc7f40d3f01f6c91efc8b213b12db78 columnFamilyName f 2023-07-21 11:16:27,776 INFO [StoreOpener-3cc7f40d3f01f6c91efc8b213b12db78-1] regionserver.HStore(310): Store=3cc7f40d3f01f6c91efc8b213b12db78/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:27,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:27,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:27,806 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 3cc7f40d3f01f6c91efc8b213b12db78; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11431391040, jitterRate=0.06463125348091125}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:27,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 3cc7f40d3f01f6c91efc8b213b12db78: 2023-07-21 11:16:27,809 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78., pid=112, masterSystemTime=1689938187760 2023-07-21 11:16:27,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:27,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 
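
[editor's note] Assignment (TransitRegionStateProcedure -> OpenRegionProcedure -> region opened on jenkins-hbase17.apache.org,37137) completes on the server side; the test client simply waits for the table to become available, as the Waiter(180) "Waiting up to [60,000] milli-secs" lines further down show. A small sketch of that waiting, assuming an HBaseTestingUtility like the one driving this run; the method and class names are made up, and this uses the generic helpers rather than the exact code of TestRSGroupsBasics.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class WaitForAssignmentSketch {
      static void waitUntilOnline(HBaseTestingUtility util, TableName tn) throws Exception {
        // Block until every region of the table has been assigned and opened.
        util.waitUntilAllRegionsAssigned(tn);
        // Or poll the admin view, mirroring the 60,000 ms Waiter in the log.
        util.waitFor(60_000, () -> util.getAdmin().isTableAvailable(tn));
        // The single region should now be located on the appInfo group's server.
        try (RegionLocator locator = util.getConnection().getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
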
2023-07-21 11:16:27,811 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=3cc7f40d3f01f6c91efc8b213b12db78, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:27,811 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938187811"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938187811"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938187811"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938187811"}]},"ts":"1689938187811"} 2023-07-21 11:16:27,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-21 11:16:27,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; OpenRegionProcedure 3cc7f40d3f01f6c91efc8b213b12db78, server=jenkins-hbase17.apache.org,37137,1689938164928 in 212 msec 2023-07-21 11:16:27,824 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-21 11:16:27,824 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, ASSIGN in 370 msec 2023-07-21 11:16:27,825 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=110, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:27,825 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938187825"}]},"ts":"1689938187825"} 2023-07-21 11:16:27,827 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-21 11:16:27,829 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=110, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:27,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 421 msec 2023-07-21 11:16:28,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 11:16:28,017 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 110 completed 2023-07-21 11:16:28,018 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:28,023 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable 
Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,028 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938188028"}]},"ts":"1689938188028"} 2023-07-21 11:16:28,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 11:16:28,032 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-21 11:16:28,033 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-21 11:16:28,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, UNASSIGN}] 2023-07-21 11:16:28,036 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, UNASSIGN 2023-07-21 11:16:28,039 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=3cc7f40d3f01f6c91efc8b213b12db78, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:28,040 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938188039"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938188039"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938188039"}]},"ts":"1689938188039"} 2023-07-21 11:16:28,047 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 3cc7f40d3f01f6c91efc8b213b12db78, server=jenkins-hbase17.apache.org,37137,1689938164928}] 2023-07-21 11:16:28,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 11:16:28,201 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:28,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 3cc7f40d3f01f6c91efc8b213b12db78, disabling compactions & flushes 2023-07-21 11:16:28,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:28,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 
2023-07-21 11:16:28,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. after waiting 0 ms 2023-07-21 11:16:28,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:28,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:28,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78. 2023-07-21 11:16:28,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 3cc7f40d3f01f6c91efc8b213b12db78: 2023-07-21 11:16:28,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:28,214 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=3cc7f40d3f01f6c91efc8b213b12db78, regionState=CLOSED 2023-07-21 11:16:28,214 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689938188214"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938188214"}]},"ts":"1689938188214"} 2023-07-21 11:16:28,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 11:16:28,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 3cc7f40d3f01f6c91efc8b213b12db78, server=jenkins-hbase17.apache.org,37137,1689938164928 in 174 msec 2023-07-21 11:16:28,222 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-21 11:16:28,222 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=3cc7f40d3f01f6c91efc8b213b12db78, UNASSIGN in 183 msec 2023-07-21 11:16:28,223 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938188223"}]},"ts":"1689938188223"} 2023-07-21 11:16:28,228 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-21 11:16:28,230 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-21 11:16:28,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 207 msec 2023-07-21 11:16:28,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 11:16:28,333 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 113 completed 2023-07-21 11:16:28,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,336 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-21 11:16:28,337 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:28,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:28,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:28,340 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:28,342 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/f, FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/recovered.edits] 2023-07-21 11:16:28,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-21 11:16:28,347 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78/recovered.edits/4.seqid 2023-07-21 11:16:28,347 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/3cc7f40d3f01f6c91efc8b213b12db78 2023-07-21 11:16:28,347 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-21 11:16:28,350 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,352 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-21 11:16:28,354 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-21 11:16:28,355 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,355 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-21 11:16:28,355 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689938188355"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:28,357 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 11:16:28,357 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3cc7f40d3f01f6c91efc8b213b12db78, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689938187409.3cc7f40d3f01f6c91efc8b213b12db78.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 11:16:28,357 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
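
[editor's note] The teardown mirrors the creation: DisableTableProcedure (pid=113) unassigns and closes the region, DeleteTableProcedure (pid=116) archives the region directory and removes the rows from hbase:meta, and DeleteNamespaceProcedure (pid=117, in the following entries) drops Group_ns. From the client, those three steps reduce to the calls below; a sketch only, assuming an Admin handle, with an invented helper name.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class CleanupSketch {
      static void dropTableAndNamespace(Admin admin) throws IOException {
        TableName tn = TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers");
        if (admin.tableExists(tn)) {
          if (!admin.isTableDisabled(tn)) {
            admin.disableTable(tn);   // DisableTableProcedure: UNASSIGN + CloseRegionProcedure
          }
          admin.deleteTable(tn);      // DeleteTableProcedure: archive files, clean hbase:meta
        }
        admin.deleteNamespace("Group_ns"); // DeleteNamespaceProcedure: only once the namespace is empty
      }
    }
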
2023-07-21 11:16:28,357 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689938188357"}]},"ts":"9223372036854775807"} 2023-07-21 11:16:28,358 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-21 11:16:28,360 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:28,360 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 26 msec 2023-07-21 11:16:28,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-21 11:16:28,444 INFO [Listener at localhost.localdomain/33557] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 116 completed 2023-07-21 11:16:28,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete Group_ns 2023-07-21 11:16:28,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:28,449 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=117, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:28,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 11:16:28,452 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=117, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:28,453 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=117, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:28,454 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-21 11:16:28,454 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 11:16:28,455 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=117, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:28,457 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=117, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:28,458 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 10 msec 2023-07-21 11:16:28,553 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 11:16:28,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:28,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:28,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:28,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:28,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:28,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:28,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:28,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:16:28,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:28,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:28,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:16:28,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:28,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37137] to rsgroup default 2023-07-21 11:16:28,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 11:16:28,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:28,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-21 11:16:28,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37137,1689938164928] are moved back to appInfo 2023-07-21 11:16:28,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-21 11:16:28,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:28,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup appInfo 2023-07-21 11:16:28,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:28,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:28,577 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:28,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:28,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:28,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:28,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 
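
[editor's note] The entries above are the per-test rsgroup reset in TestRSGroupsBase.tearDownAfterMethod: move any leftover tables and servers back to default, remove the extra groups, and re-create the 'master' group; the failed MoveServers for jenkins-hbase17.apache.org:41077 that follows is the expected ConstraintException the test logs as "Got this on setup, FYI". The equivalent calls through the rsgroup admin client look roughly like this sketch; the test actually goes through a VerifyingRSGroupAdminClient wrapper, and the class/method names here are invented for illustration.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      static void resetGroups(Connection conn) throws IOException {
        RSGroupAdmin groups = new RSGroupAdminClient(conn);
        // Put the lone appInfo server back into the default group, then drop the group.
        groups.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:37137")),
            "default");
        groups.removeRSGroup("appInfo");
        // Re-create the 'master' rsgroup used by this test suite.
        groups.addRSGroup("master");
      }
    }
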
2023-07-21 11:16:28,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:28,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:28,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 706 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939388588, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:28,589 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:28,590 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:28,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,591 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:28,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:28,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:28,614 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=527 (was 524) - Thread LEAK? -, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=852 (was 852), ProcessCount=188 (was 186) - ProcessCount LEAK? -, AvailableMemoryMB=3180 (was 3193) 2023-07-21 11:16:28,614 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=527 is superior to 500 2023-07-21 11:16:28,634 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=527, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=852, ProcessCount=188, AvailableMemoryMB=3180 2023-07-21 11:16:28,635 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=527 is superior to 500 2023-07-21 11:16:28,635 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-21 11:16:28,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:28,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:16:28,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:28,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:28,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:28,649 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:28,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:28,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:28,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:28,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:28,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:28,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:28,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 734 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939388659, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:28,665 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:28,666 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:28,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,668 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:28,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:28,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:28,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:28,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:28,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): 
Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:28,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:28,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:28,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:28,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:28,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:28,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:28,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:28,691 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:28,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:28,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:28,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:28,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:28,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:28,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:28,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 764 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939388708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:28,711 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at 
org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:28,712 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:28,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,719 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:28,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:28,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:28,753 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=528 (was 527) - Thread LEAK? -, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=840 (was 852), ProcessCount=186 (was 188), AvailableMemoryMB=3188 (was 3180) - AvailableMemoryMB LEAK? 
- 2023-07-21 11:16:28,754 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=528 is superior to 500 2023-07-21 11:16:28,776 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=528, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=840, ProcessCount=186, AvailableMemoryMB=3187 2023-07-21 11:16:28,777 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=528 is superior to 500 2023-07-21 11:16:28,777 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-21 11:16:28,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:28,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:28,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:28,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:28,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:28,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:28,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:28,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:28,792 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 11:16:28,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:28,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:28,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-21 11:16:28,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:28,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:28,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41077] to rsgroup master 2023-07-21 11:16:28,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:28,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] ipc.CallRunner(144): callId: 792 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:49392 deadline: 1689939388817, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 2023-07-21 11:16:28,817 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:41077 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 11:16:28,819 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:28,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:28,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:28,821 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37137, jenkins-hbase17.apache.org:39805, jenkins-hbase17.apache.org:40467, jenkins-hbase17.apache.org:40783], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:28,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:28,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41077] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:28,822 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-21 11:16:28,822 INFO [Listener at localhost.localdomain/33557] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 11:16:28,822 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x45869290 to 127.0.0.1:61077 2023-07-21 11:16:28,822 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,823 DEBUG [Listener at localhost.localdomain/33557] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 11:16:28,823 DEBUG [Listener at localhost.localdomain/33557] util.JVMClusterUtil(257): Found active master hash=753832651, stopped=false 2023-07-21 11:16:28,824 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:16:28,824 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:16:28,824 INFO [Listener at localhost.localdomain/33557] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:28,825 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:28,825 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:28,825 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, 
quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:28,825 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:28,825 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:28,825 INFO [Listener at localhost.localdomain/33557] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 11:16:28,825 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:28,825 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:28,825 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:28,825 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:28,826 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:28,826 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:28,826 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1512fdf2 to 127.0.0.1:61077 2023-07-21 11:16:28,826 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,827 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,40783,1689938159262' ***** 2023-07-21 11:16:28,827 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:28,827 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,39805,1689938159444' ***** 2023-07-21 11:16:28,827 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:28,827 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:28,827 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,37137,1689938164928' ***** 2023-07-21 11:16:28,827 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:28,827 INFO [Listener at 
localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:28,835 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,40467,1689938170241' ***** 2023-07-21 11:16:28,835 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:28,836 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:28,837 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:28,838 INFO [RS:1;jenkins-hbase17:39805] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3ff1fc24{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:28,838 INFO [RS:0;jenkins-hbase17:40783] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6d6a5bc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:28,841 INFO [RS:3;jenkins-hbase17:37137] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6487d5f1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:28,841 INFO [RS:4;jenkins-hbase17:40467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@dd3dd9f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:28,841 INFO [RS:1;jenkins-hbase17:39805] server.AbstractConnector(383): Stopped ServerConnector@1aae97ce{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:28,842 INFO [RS:1;jenkins-hbase17:39805] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:28,842 INFO [RS:4;jenkins-hbase17:40467] server.AbstractConnector(383): Stopped ServerConnector@303421fd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:28,842 INFO [RS:4;jenkins-hbase17:40467] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:28,842 INFO [RS:3;jenkins-hbase17:37137] server.AbstractConnector(383): Stopped ServerConnector@34763ecd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:28,842 INFO [RS:3;jenkins-hbase17:37137] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:28,844 INFO [RS:0;jenkins-hbase17:40783] server.AbstractConnector(383): Stopped ServerConnector@8f1e840{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:28,844 INFO [RS:0;jenkins-hbase17:40783] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:28,849 INFO [RS:1;jenkins-hbase17:39805] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d364a00{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:28,849 INFO [RS:3;jenkins-hbase17:37137] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@16316a5d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:28,850 INFO [RS:1;jenkins-hbase17:39805] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c951266{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:28,849 INFO [RS:0;jenkins-hbase17:40783] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@184daf7c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:28,849 INFO [RS:4;jenkins-hbase17:40467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1bc5d98b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:28,852 INFO [RS:0;jenkins-hbase17:40783] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64e08883{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:28,851 INFO [RS:3;jenkins-hbase17:37137] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e4aece7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:28,853 INFO [RS:4;jenkins-hbase17:40467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@66c4a3b5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:28,853 INFO [RS:1;jenkins-hbase17:39805] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:28,853 INFO [RS:1;jenkins-hbase17:39805] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:28,853 INFO [RS:1;jenkins-hbase17:39805] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:28,854 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:28,854 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:28,854 DEBUG [RS:1;jenkins-hbase17:39805] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7aac64f5 to 127.0.0.1:61077 2023-07-21 11:16:28,854 DEBUG [RS:1;jenkins-hbase17:39805] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,854 INFO [RS:1;jenkins-hbase17:39805] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:28,854 INFO [RS:1;jenkins-hbase17:39805] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:28,854 INFO [RS:1;jenkins-hbase17:39805] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 11:16:28,854 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 11:16:28,854 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:16:28,854 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 11:16:28,854 INFO [RS:0;jenkins-hbase17:40783] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:28,855 INFO [RS:0;jenkins-hbase17:40783] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:28,855 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:28,855 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:28,855 INFO [RS:0;jenkins-hbase17:40783] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:28,855 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(3305): Received CLOSE for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:28,855 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2bd94f497343684e2f5a451c6e430d4d, disabling compactions & flushes 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:28,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. after waiting 0 ms 2023-07-21 11:16:28,855 DEBUG [RS:0;jenkins-hbase17:40783] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x294ce4c7 to 127.0.0.1:61077 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:28,855 INFO [RS:3;jenkins-hbase17:37137] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:28,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2bd94f497343684e2f5a451c6e430d4d 1/1 column families, dataSize=365 B heapSize=1.13 KB 2023-07-21 11:16:28,856 INFO [RS:3;jenkins-hbase17:37137] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:28,856 INFO [RS:3;jenkins-hbase17:37137] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 11:16:28,856 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:28,856 DEBUG [RS:3;jenkins-hbase17:37137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a330210 to 127.0.0.1:61077 2023-07-21 11:16:28,856 DEBUG [RS:3;jenkins-hbase17:37137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,855 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 11:16:28,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:28,856 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,37137,1689938164928; all regions closed. 2023-07-21 11:16:28,856 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:28,855 DEBUG [RS:0;jenkins-hbase17:40783] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,856 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:28,857 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=15.27 KB heapSize=25.55 KB 2023-07-21 11:16:28,856 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:16:28,857 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1478): Online Regions={2bd94f497343684e2f5a451c6e430d4d=hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.} 2023-07-21 11:16:28,857 DEBUG [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1504): Waiting on 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:28,857 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:16:28,857 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:16:28,860 INFO [RS:4;jenkins-hbase17:40467] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:28,861 INFO [RS:4;jenkins-hbase17:40467] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:28,861 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:28,861 INFO [RS:4;jenkins-hbase17:40467] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:28,861 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(3305): Received CLOSE for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:28,861 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:28,861 DEBUG [RS:4;jenkins-hbase17:40467] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x46f5c2a2 to 127.0.0.1:61077 2023-07-21 11:16:28,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:28,861 DEBUG [RS:4;jenkins-hbase17:40467] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,861 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:16:28,861 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1478): Online Regions={2782e41606006289532e239f665ea4eb=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.} 2023-07-21 11:16:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 0 ms 2023-07-21 11:16:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:28,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2782e41606006289532e239f665ea4eb 1/1 column families, dataSize=9.72 KB heapSize=15.93 KB 2023-07-21 11:16:28,862 DEBUG [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1504): Waiting on 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:28,886 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:28,886 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:28,886 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:28,890 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:28,897 DEBUG [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:28,897 INFO [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C37137%2C1689938164928.meta:.meta(num 1689938167630) 2023-07-21 11:16:28,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=365 B at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/.tmp/info/db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:28,989 DEBUG [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:28,989 INFO [RS:3;jenkins-hbase17:37137] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C37137%2C1689938164928:(num 1689938165791) 2023-07-21 11:16:28,989 DEBUG [RS:3;jenkins-hbase17:37137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:28,989 INFO [RS:3;jenkins-hbase17:37137] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:28,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.72 KB at sequenceid=79 (bloomFilter=true), 
to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:28,992 INFO [RS:3;jenkins-hbase17:37137] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:29,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:29,013 INFO [RS:3;jenkins-hbase17:37137] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:29,013 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:29,013 INFO [RS:3;jenkins-hbase17:37137] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:29,013 INFO [RS:3;jenkins-hbase17:37137] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:29,015 INFO [RS:3;jenkins-hbase17:37137] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:37137 2023-07-21 11:16:29,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/3e23205333ea45fca4f644908fd8226c as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:29,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:29,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/.tmp/info/db07fdd1032644e6999e588b237b5bc3 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info/db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:29,023 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:16:29,023 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:16:29,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:29,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info/db07fdd1032644e6999e588b237b5bc3, entries=5, sequenceid=11, filesize=5.1 K 2023-07-21 11:16:29,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~365 B/365, 
heapSize ~1.11 KB/1136, currentSize=0 B/0 for 2bd94f497343684e2f5a451c6e430d4d in 169ms, sequenceid=11, compaction requested=false 2023-07-21 11:16:29,026 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:16:29,026 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:16:29,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:29,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c, entries=14, sequenceid=79, filesize=5.5 K 2023-07-21 11:16:29,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.72 KB/9952, heapSize ~15.91 KB/16296, currentSize=0 B/0 for 2782e41606006289532e239f665ea4eb in 174ms, sequenceid=79, compaction requested=true 2023-07-21 11:16:29,048 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/82.seqid, newMaxSeqId=82, maxSeqId=40 2023-07-21 11:16:29,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/recovered.edits/14.seqid, newMaxSeqId=14, maxSeqId=1 2023-07-21 11:16:29,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:29,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:29,050 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:29,050 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:29,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:29,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:29,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:29,056 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 11:16:29,057 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,40783,1689938159262; all regions closed. 
2023-07-21 11:16:29,063 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:29,063 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:29,063 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,063 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,063 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:29,063 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,064 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,064 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37137,1689938164928 2023-07-21 11:16:29,064 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,065 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,37137,1689938164928] 2023-07-21 11:16:29,065 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,37137,1689938164928; numProcessing=1 2023-07-21 11:16:29,065 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,37137,1689938164928 already deleted, retry=false 2023-07-21 11:16:29,065 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,37137,1689938164928 expired; onlineServers=3 2023-07-21 11:16:29,067 DEBUG [RS:0;jenkins-hbase17:40783] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:29,067 
INFO [RS:0;jenkins-hbase17:40783] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C40783%2C1689938159262:(num 1689938162268) 2023-07-21 11:16:29,067 DEBUG [RS:0;jenkins-hbase17:40783] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:29,067 INFO [RS:0;jenkins-hbase17:40783] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:29,067 INFO [RS:0;jenkins-hbase17:40783] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:29,067 INFO [RS:0;jenkins-hbase17:40783] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:29,067 INFO [RS:0;jenkins-hbase17:40783] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:29,067 INFO [RS:0;jenkins-hbase17:40783] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:29,067 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:29,068 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,40467,1689938170241; all regions closed. 2023-07-21 11:16:29,068 INFO [RS:0;jenkins-hbase17:40783] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:40783 2023-07-21 11:16:29,071 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:29,071 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,071 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:29,073 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40783,1689938159262 2023-07-21 11:16:29,074 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,40783,1689938159262] 2023-07-21 11:16:29,074 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,40783,1689938159262; numProcessing=2 2023-07-21 11:16:29,077 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,40783,1689938159262 already deleted, retry=false 2023-07-21 11:16:29,077 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,40783,1689938159262 expired; onlineServers=2 2023-07-21 11:16:29,078 DEBUG [RS:4;jenkins-hbase17:40467] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:29,079 INFO [RS:4;jenkins-hbase17:40467] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C40467%2C1689938170241:(num 1689938170980) 2023-07-21 11:16:29,079 DEBUG [RS:4;jenkins-hbase17:40467] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:29,079 INFO [RS:4;jenkins-hbase17:40467] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:29,079 INFO [RS:4;jenkins-hbase17:40467] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:29,079 INFO [RS:4;jenkins-hbase17:40467] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:29,079 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:29,079 INFO [RS:4;jenkins-hbase17:40467] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:29,079 INFO [RS:4;jenkins-hbase17:40467] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:29,080 INFO [RS:4;jenkins-hbase17:40467] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:40467 2023-07-21 11:16:29,082 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,082 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:29,082 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40467,1689938170241 2023-07-21 11:16:29,082 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,40467,1689938170241] 2023-07-21 11:16:29,082 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,40467,1689938170241; numProcessing=3 2023-07-21 11:16:29,084 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,40467,1689938170241 already deleted, retry=false 2023-07-21 11:16:29,084 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,40467,1689938170241 expired; onlineServers=1 2023-07-21 11:16:29,184 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,184 INFO [RS:4;jenkins-hbase17:40467] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,40467,1689938170241; zookeeper connection closed. 
2023-07-21 11:16:29,184 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40467-0x10187975688000d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,184 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6f5d8a2a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6f5d8a2a 2023-07-21 11:16:29,256 DEBUG [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 11:16:29,325 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,325 INFO [RS:0;jenkins-hbase17:40783] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,40783,1689938159262; zookeeper connection closed. 2023-07-21 11:16:29,325 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:40783-0x101879756880001, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,325 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1af9d91b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1af9d91b 2023-07-21 11:16:29,391 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.68 KB at sequenceid=154 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:29,399 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:29,414 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=555 B at sequenceid=154 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/rep_barrier/ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:29,421 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:29,425 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,425 INFO [RS:3;jenkins-hbase17:37137] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,37137,1689938164928; zookeeper connection closed. 
2023-07-21 11:16:29,425 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:37137-0x10187975688000b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,425 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3ef5ba75] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3ef5ba75 2023-07-21 11:16:29,432 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.04 KB at sequenceid=154 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:29,437 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:29,438 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/06500b67645f42e6aef9708c4d818841 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:29,444 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:29,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841, entries=20, sequenceid=154, filesize=7.1 K 2023-07-21 11:16:29,446 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/rep_barrier/ce1c3c0335804360b6540dfdf53da436 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:29,453 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:29,453 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/ce1c3c0335804360b6540dfdf53da436, entries=5, sequenceid=154, filesize=5.5 K 2023-07-21 11:16:29,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/0858982fb8ba4cf8af5d7053ba6f2991 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:29,457 DEBUG [RS:1;jenkins-hbase17:39805] 
regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 11:16:29,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:29,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991, entries=10, sequenceid=154, filesize=5.7 K 2023-07-21 11:16:29,461 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~15.27 KB/15632, heapSize ~25.50 KB/26112, currentSize=0 B/0 for 1588230740 in 605ms, sequenceid=154, compaction requested=true 2023-07-21 11:16:29,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:16:29,479 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/157.seqid, newMaxSeqId=157, maxSeqId=88 2023-07-21 11:16:29,480 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:29,481 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:29,481 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:29,481 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:29,657 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,39805,1689938159444; all regions closed. 
2023-07-21 11:16:29,664 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444/jenkins-hbase17.apache.org%2C39805%2C1689938159444.meta.1689938177976.meta not finished, retry = 0 2023-07-21 11:16:29,770 DEBUG [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:29,770 INFO [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C39805%2C1689938159444.meta:.meta(num 1689938177976) 2023-07-21 11:16:29,776 DEBUG [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:29,776 INFO [RS:1;jenkins-hbase17:39805] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C39805%2C1689938159444:(num 1689938162261) 2023-07-21 11:16:29,776 DEBUG [RS:1;jenkins-hbase17:39805] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:29,776 INFO [RS:1;jenkins-hbase17:39805] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:29,776 INFO [RS:1;jenkins-hbase17:39805] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:29,776 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:29,777 INFO [RS:1;jenkins-hbase17:39805] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:39805 2023-07-21 11:16:29,778 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:29,778 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,39805,1689938159444 2023-07-21 11:16:29,779 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,39805,1689938159444] 2023-07-21 11:16:29,779 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,39805,1689938159444; numProcessing=4 2023-07-21 11:16:29,780 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,39805,1689938159444 already deleted, retry=false 2023-07-21 11:16:29,780 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,39805,1689938159444 expired; onlineServers=0 2023-07-21 11:16:29,780 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,41077,1689938157103' ***** 2023-07-21 11:16:29,780 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 11:16:29,781 DEBUG [M:0;jenkins-hbase17:41077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6dd398f0, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:29,781 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:29,782 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:29,782 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:29,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:29,783 INFO [M:0;jenkins-hbase17:41077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@292c560c{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:16:29,783 INFO [M:0;jenkins-hbase17:41077] server.AbstractConnector(383): Stopped ServerConnector@296842bc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:29,783 INFO [M:0;jenkins-hbase17:41077] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:29,784 INFO [M:0;jenkins-hbase17:41077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31f3f57b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:29,784 INFO [M:0;jenkins-hbase17:41077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4eea13c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:29,784 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41077,1689938157103 2023-07-21 11:16:29,785 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41077,1689938157103; all regions closed. 2023-07-21 11:16:29,785 DEBUG [M:0;jenkins-hbase17:41077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:29,785 INFO [M:0;jenkins-hbase17:41077] master.HMaster(1491): Stopping master jetty server 2023-07-21 11:16:29,785 INFO [M:0;jenkins-hbase17:41077] server.AbstractConnector(383): Stopped ServerConnector@24ed8efe{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:29,785 DEBUG [M:0;jenkins-hbase17:41077] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 11:16:29,786 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-21 11:16:29,786 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938161574] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938161574,5,FailOnTimeoutGroup] 2023-07-21 11:16:29,786 DEBUG [M:0;jenkins-hbase17:41077] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 11:16:29,786 INFO [M:0;jenkins-hbase17:41077] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 11:16:29,786 INFO [M:0;jenkins-hbase17:41077] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 11:16:29,786 INFO [M:0;jenkins-hbase17:41077] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 11:16:29,786 DEBUG [M:0;jenkins-hbase17:41077] master.HMaster(1512): Stopping service threads 2023-07-21 11:16:29,786 INFO [M:0;jenkins-hbase17:41077] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 11:16:29,786 ERROR [M:0;jenkins-hbase17:41077] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 11:16:29,787 INFO [M:0;jenkins-hbase17:41077] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 11:16:29,787 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 11:16:29,787 DEBUG [M:0;jenkins-hbase17:41077] zookeeper.ZKUtil(398): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 11:16:29,787 WARN [M:0;jenkins-hbase17:41077] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 11:16:29,787 INFO [M:0;jenkins-hbase17:41077] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 11:16:29,787 INFO [M:0;jenkins-hbase17:41077] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 11:16:29,788 DEBUG [M:0;jenkins-hbase17:41077] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:16:29,788 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:29,788 DEBUG [M:0;jenkins-hbase17:41077] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:29,788 DEBUG [M:0;jenkins-hbase17:41077] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:16:29,788 DEBUG [M:0;jenkins-hbase17:41077] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 11:16:29,788 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=392.68 KB heapSize=468.49 KB 2023-07-21 11:16:29,788 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938161574] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938161574,5,FailOnTimeoutGroup] 2023-07-21 11:16:29,803 INFO [M:0;jenkins-hbase17:41077] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=392.68 KB at sequenceid=868 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cfaa2766a0134ee480cd35adbbbb997d 2023-07-21 11:16:29,810 DEBUG [M:0;jenkins-hbase17:41077] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cfaa2766a0134ee480cd35adbbbb997d as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cfaa2766a0134ee480cd35adbbbb997d 2023-07-21 11:16:29,816 INFO [M:0;jenkins-hbase17:41077] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cfaa2766a0134ee480cd35adbbbb997d, entries=117, sequenceid=868, filesize=26.6 K 2023-07-21 11:16:29,820 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegion(2948): Finished flush of dataSize ~392.68 KB/402105, heapSize ~468.48 KB/479720, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=868, compaction requested=false 2023-07-21 11:16:29,822 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:29,823 DEBUG [M:0;jenkins-hbase17:41077] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:29,828 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:29,828 INFO [M:0;jenkins-hbase17:41077] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 11:16:29,829 INFO [M:0;jenkins-hbase17:41077] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41077 2023-07-21 11:16:29,830 DEBUG [M:0;jenkins-hbase17:41077] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,41077,1689938157103 already deleted, retry=false 2023-07-21 11:16:29,879 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,879 INFO [RS:1;jenkins-hbase17:39805] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,39805,1689938159444; zookeeper connection closed. 
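The ~392 KB flush recorded above is the master's local 'master:store' region persisting its procedure data to an HFile before the region closes. The same memstore-flush mechanism can be requested from a client for an ordinary table; a minimal sketch, assuming a reachable cluster configuration on the classpath (the table name is a placeholder, not taken from this run):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // picks up hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Ask every region of the table to write its memstore out as HFiles,
          // the same flush path the master store goes through at shutdown.
          admin.flush(TableName.valueOf("someTable"));
        }
      }
    }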
2023-07-21 11:16:29,879 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:39805-0x101879756880002, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,880 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7f5f93e4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7f5f93e4 2023-07-21 11:16:29,880 INFO [Listener at localhost.localdomain/33557] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-21 11:16:29,979 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,979 INFO [M:0;jenkins-hbase17:41077] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41077,1689938157103; zookeeper connection closed. 2023-07-21 11:16:29,980 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:41077-0x101879756880000, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:29,981 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-21 11:16:31,486 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:31,982 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 11:16:31,982 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 11:16:31,982 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 11:16:31,982 DEBUG [Listener at localhost.localdomain/33557] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
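At this point the test has shut down the HBase layer (1 master, 5 regionservers), slept, and is restarting master and regionservers on fresh random ports while HDFS and ZooKeeper keep running. A minimal sketch of that stop/restart pattern, assuming the HBase 2.x HBaseTestingUtility API (method names such as restartHBaseCluster are from the 2.x test utility and should be verified against the exact version in use):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Start DFS + ZooKeeper + HBase with one master and three regionservers.
        util.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1).numRegionServers(3).numDataNodes(3).build());

        // Stop only the HBase layer; the mini DFS and ZK ensemble stay up.
        util.shutdownMiniHBaseCluster();

        // Bring master and regionservers back up (ports are chosen at random).
        util.restartHBaseCluster(3);

        util.shutdownMiniCluster();  // full teardown at the end of the test
      }
    }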
2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:31,983 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:31,984 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34157 2023-07-21 11:16:31,985 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:31,985 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:31,986 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34157 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:31,989 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:341570x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:31,990 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34157-0x101879756880010 connected 2023-07-21 11:16:31,991 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:31,992 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:31,992 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:31,997 DEBUG [Listener at 
localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34157 2023-07-21 11:16:31,997 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34157 2023-07-21 11:16:31,997 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34157 2023-07-21 11:16:31,998 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34157 2023-07-21 11:16:31,998 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34157 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:32,000 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
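The repeated "Set watcher on znode that does not yet exist, /hbase/master" entries earlier in this startup are exists-watches: with the plain ZooKeeper API, exists() on an absent path returns null but still registers a watcher that fires NodeCreated when the znode later appears. A self-contained sketch against a local ensemble (the quorum address is illustrative, not taken from this run):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ExistsWatchSketch {
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event=" + event.getType() + " path=" + event.getPath());

        // Connect to a ZooKeeper ensemble (address is a placeholder).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, watcher);

        // exists() on a missing znode returns null but leaves a one-shot watch
        // behind, so a later create of /hbase/master triggers a NodeCreated event.
        if (zk.exists("/hbase/master", watcher) == null) {
          System.out.println("/hbase/master not present yet; watch registered");
        }
        Thread.sleep(60_000);  // wait for the NodeCreated event in this toy example
        zk.close();
      }
    }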
2023-07-21 11:16:32,001 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 38715 2023-07-21 11:16:32,001 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:32,003 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,003 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@198e553f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:32,003 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,004 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a33e4ba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:32,144 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:32,145 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:32,145 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:32,146 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:16:32,147 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,149 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@67d54367{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-38715-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4490079753856864328/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:16:32,150 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@4a4234b6{HTTP/1.1, (http/1.1)}{0.0.0.0:38715} 2023-07-21 11:16:32,151 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @41134ms 2023-07-21 11:16:32,151 INFO [Listener at localhost.localdomain/33557] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae, hbase.cluster.distributed=false 2023-07-21 11:16:32,158 DEBUG [pool-356-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:32,169 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:32,171 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:41949 2023-07-21 11:16:32,172 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:32,176 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:32,177 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,179 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,181 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41949 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:32,187 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:419490x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:32,187 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:419490x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:32,189 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41949-0x101879756880011 connected 2023-07-21 11:16:32,190 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:32,190 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:32,191 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 
with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41949 2023-07-21 11:16:32,191 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41949 2023-07-21 11:16:32,191 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41949 2023-07-21 11:16:32,192 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41949 2023-07-21 11:16:32,192 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41949 2023-07-21 11:16:32,194 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:32,195 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:32,195 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:32,195 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:32,196 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:32,196 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:32,196 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
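The handlerCount values in the RpcExecutor entries above are governed by the standard HBase RPC handler settings; a short sketch of setting them on a Configuration (property names are the standard 2.x keys, the values here are illustrative and not read from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcHandlerConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Handlers backing default.FPBQ.Fifo (general read/write calls).
        conf.setInt("hbase.regionserver.handler.count", 30);
        // Handlers backing priority.RWQ.Fifo (priority / meta calls).
        conf.setInt("hbase.regionserver.metahandler.count", 20);
        // Handlers backing replication.FPBQ.Fifo.
        conf.setInt("hbase.regionserver.replication.handler.count", 3);
        System.out.println(conf.getInt("hbase.regionserver.handler.count", -1));
      }
    }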
2023-07-21 11:16:32,197 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 42039 2023-07-21 11:16:32,197 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:32,201 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,201 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55137119{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:32,202 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,202 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69f1364{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:32,347 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:32,349 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:32,349 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:32,349 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:32,350 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,351 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@662417e8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-42039-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8306882438677165732/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:32,353 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@5819e77a{HTTP/1.1, (http/1.1)}{0.0.0.0:42039} 2023-07-21 11:16:32,353 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @41336ms 2023-07-21 11:16:32,366 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:32,367 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,367 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:16:32,367 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:32,367 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,367 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:32,367 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:32,368 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43985 2023-07-21 11:16:32,368 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:32,370 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:32,371 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,372 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,373 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43985 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:32,376 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:439850x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:32,378 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:439850x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:32,379 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43985-0x101879756880012 connected 2023-07-21 11:16:32,379 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:32,379 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:32,380 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43985 2023-07-21 11:16:32,380 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43985 2023-07-21 11:16:32,380 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43985 2023-07-21 11:16:32,381 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43985 2023-07-21 11:16:32,381 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43985 2023-07-21 11:16:32,383 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:32,383 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:32,383 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:32,384 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:32,384 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:32,384 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:32,384 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:32,385 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 45063 2023-07-21 11:16:32,385 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:32,387 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,388 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@8018fa5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:32,388 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,388 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2151df19{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:32,484 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:32,485 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:32,485 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:32,486 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:32,488 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,489 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2c33fc1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-45063-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5085734155384510550/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:32,490 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@630b0f6{HTTP/1.1, (http/1.1)}{0.0.0.0:45063} 2023-07-21 11:16:32,491 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @41474ms 2023-07-21 11:16:32,500 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:32,500 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,500 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,500 
INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:32,500 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:32,500 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:32,500 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:32,501 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43529 2023-07-21 11:16:32,501 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:32,502 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:32,503 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,504 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,506 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43529 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:32,511 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:435290x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:32,511 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:435290x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:32,514 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:435290x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:32,521 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43529-0x101879756880013 connected 2023-07-21 11:16:32,521 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:32,522 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43529 2023-07-21 11:16:32,522 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43529 2023-07-21 11:16:32,522 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43529 2023-07-21 11:16:32,523 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43529 2023-07-21 11:16:32,523 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43529 2023-07-21 11:16:32,526 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:32,527 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:32,527 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:32,528 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:32,528 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:32,528 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:32,528 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:32,529 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 41007 2023-07-21 11:16:32,529 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:32,535 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,536 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@360b4e2b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:32,536 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,536 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@521a39d5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:32,638 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:32,638 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:32,638 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:32,639 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:32,641 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:32,642 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@55edb755{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-41007-hbase-server-2_4_18-SNAPSHOT_jar-_-any-58575949638947621/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:32,644 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@537b3ee7{HTTP/1.1, (http/1.1)}{0.0.0.0:41007} 2023-07-21 11:16:32,644 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @41627ms 2023-07-21 11:16:32,653 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:32,663 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@46a22f4f{HTTP/1.1, (http/1.1)}{0.0.0.0:36105} 2023-07-21 11:16:32,663 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @41646ms 2023-07-21 11:16:32,664 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,665 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:16:32,665 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,669 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:32,669 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:32,669 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:32,669 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:32,671 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:32,671 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,34157,1689938191982 from backup master directory 2023-07-21 11:16:32,671 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,671 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:16:32,671 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:32,671 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,672 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:32,674 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:32,713 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:32,753 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x740b1723 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:32,772 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2880c223, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:32,772 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:32,773 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 11:16:32,773 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:32,780 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103-dead as it is dead 2023-07-21 11:16:32,782 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103-dead/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 2023-07-21 11:16:32,787 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103-dead/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 after 5ms 
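The active-master handoff just logged (backup-masters znode created, then deleted as jenkins-hbase17.apache.org,34157 registers as active master) is visible from a client through ClusterMetrics; a hedged sketch, assuming the HBase 2.x Admin API and a reachable cluster configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ActiveMasterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          ClusterMetrics metrics = admin.getClusterMetrics();
          // Active master plus any masters still parked under /hbase/backup-masters.
          System.out.println("active master = " + metrics.getMasterName());
          System.out.println("backup masters = " + metrics.getBackupMasterNames());
        }
      }
    }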
2023-07-21 11:16:32,788 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103-dead/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 2023-07-21 11:16:32,788 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,41077,1689938157103-dead 2023-07-21 11:16:32,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,791 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34157%2C1689938191982, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/oldWALs, maxLogs=10 2023-07-21 11:16:32,812 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:32,813 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:32,814 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:32,817 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 2023-07-21 11:16:32,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]] 2023-07-21 11:16:32,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:32,817 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:32,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:32,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:32,820 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:32,821 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 11:16:32,822 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 11:16:32,827 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cfaa2766a0134ee480cd35adbbbb997d 2023-07-21 11:16:32,827 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:32,828 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-21 11:16:32,828 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 2023-07-21 11:16:32,873 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 1023, firstSequenceIdInLog=3, maxSequenceIdInLog=870, 
path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 2023-07-21 11:16:32,876 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C41077%2C1689938157103.1689938160309 2023-07-21 11:16:32,882 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:32,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/870.seqid, newMaxSeqId=870, maxSeqId=1 2023-07-21 11:16:32,886 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=871; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10874450720, jitterRate=0.012762144207954407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:32,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:32,887 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 11:16:32,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 11:16:32,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 11:16:32,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
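The RegionProcedureStore entries that follow replay the procedures persisted in master:store (InitMetaProcedure, CreateTableProcedure, ServerCrashProcedure, and so on). The current procedure list can also be dumped from a client, assuming Admin#getProcedures() (HBase 2.x, returns a JSON string) is available in the running version:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ProcedureDumpSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Procedures known to the master, i.e. the same state the
          // RegionProcedureStore recovers from master:store on startup.
          System.out.println(admin.getProcedures());
        }
      }
    }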
2023-07-21 11:16:32,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 11:16:32,902 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-21 11:16:32,903 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:16:32,903 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-21 11:16:32,903 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-21 11:16:32,904 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:16:32,904 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:32,904 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:32,905 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=18, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,34719,1689938159621, splitWal=true, meta=false 2023-07-21 11:16:32,905 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=19, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-21 11:16:32,905 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:32,906 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:32,906 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=26, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:32,906 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=27, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:32,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=48, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:32,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=69, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:32,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=70, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:32,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:32,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=76, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:32,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:32,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:32,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:32,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=84, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:32,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:32,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:32,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:32,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=92, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:32,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 11:16:32,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=96, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 11:16:32,910 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689938184318 type: FLUSH version: 2 ttl: 0 ) 2023-07-21 11:16:32,910 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:32,910 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=103, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:32,910 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:32,910 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:32,911 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=108, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:32,911 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=109, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:32,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:32,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:32,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=116, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:32,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=117, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:32,912 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 23 msec 2023-07-21 11:16:32,912 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 11:16:32,917 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-21 11:16:32,917 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase17.apache.org,39805,1689938159444, table=hbase:meta, region=1588230740 2023-07-21 11:16:32,919 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 2023-07-21 11:16:32,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,39805,1689938159444 already deleted, retry=false 2023-07-21 11:16:32,920 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,39805,1689938159444 on jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,921 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=118, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,39805,1689938159444, splitWal=true, meta=true 2023-07-21 11:16:32,921 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=118 for jenkins-hbase17.apache.org,39805,1689938159444 (carryingMeta=true) jenkins-hbase17.apache.org,39805,1689938159444/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@2b2077df[Write locks = 1, Read locks = 0], oldState=ONLINE. 
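[editor's note] The rolled-back pid=109 above is the replayed outcome of the earlier testCreateWhenRsgroupNoOnlineServers case: a table create fails because the rsgroup ("appInfo") that the table's namespace is pinned to has no online servers. As a rough, hedged sketch of the hbase-rsgroup client API that drives this kind of scenario (not the test's own code; group name and server host/port are placeholders):

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create a group and move a single region server into it (host and port are illustrative).
      rsGroupAdmin.addRSGroup("appInfo");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 41949)),
          "appInfo");
      // Inspect the group; if all of its servers later go offline, creates for tables bound to it
      // fail with the HBaseIOException recorded in the procedure result above.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appInfo");
      System.out.println("appInfo servers: " + info.getServers());
    }
  }
}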
2023-07-21 11:16:32,922 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,37137,1689938164928 already deleted, retry=false 2023-07-21 11:16:32,922 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,37137,1689938164928 on jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,923 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=119, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,37137,1689938164928, splitWal=true, meta=false 2023-07-21 11:16:32,923 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=119 for jenkins-hbase17.apache.org,37137,1689938164928 (carryingMeta=false) jenkins-hbase17.apache.org,37137,1689938164928/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@1ed097a5[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 11:16:32,924 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,40467,1689938170241 already deleted, retry=false 2023-07-21 11:16:32,924 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,40467,1689938170241 on jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,925 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,40467,1689938170241, splitWal=true, meta=false 2023-07-21 11:16:32,925 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=120 for jenkins-hbase17.apache.org,40467,1689938170241 (carryingMeta=false) jenkins-hbase17.apache.org,40467,1689938170241/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@417e2892[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 11:16:32,926 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,40783,1689938159262 already deleted, retry=false 2023-07-21 11:16:32,926 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,40783,1689938159262 on jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,926 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=121, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,40783,1689938159262, splitWal=true, meta=false 2023-07-21 11:16:32,927 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=121 for jenkins-hbase17.apache.org,40783,1689938159262 (carryingMeta=false) jenkins-hbase17.apache.org,40783,1689938159262/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@62017ed4[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-21 11:16:32,927 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-21 11:16:32,927 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 11:16:32,928 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 11:16:32,928 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 11:16:32,929 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 11:16:32,930 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 11:16:32,930 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:32,930 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:32,931 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:32,931 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:32,930 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:32,933 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,34157,1689938191982, sessionid=0x101879756880010, setting cluster-up flag (Was=false) 2023-07-21 11:16:32,934 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 11:16:32,935 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,937 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, 
/hbase/online-snapshot/abort 2023-07-21 11:16:32,938 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:32,940 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 11:16:32,940 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 11:16:32,942 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-21 11:16:32,943 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:32,943 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 11:16:32,943 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 11:16:32,943 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 11:16:32,945 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:32,946 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:39805 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:39805 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:32,946 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:39805 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:39805 2023-07-21 11:16:32,948 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:32,948 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:32,949 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:32,949 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:32,951 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:32,951 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:32,954 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:32,954 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:32,954 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:32,954 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:32,955 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:32,956 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:32,957 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ReadOnlyZKClient(139): Connect 0x5e61af85 to 127.0.0.1:61077 with
session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:32,957 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ReadOnlyZKClient(139): Connect 0x6fcbfeae to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:32,964 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:32,964 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:32,965 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:32,967 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:16:32,967 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:16:32,967 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:16:32,967 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
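[editor's note] The two StochasticLoadBalancer entries above (one balancer built for the master itself and one created on behalf of the rsgroup endpoint, hence the duplicate output) echo the knobs that bound its search: maxSteps, stepsPerRegion and maxRunningTime. A minimal, assumed sketch of how those knobs are set through configuration; the values simply mirror the defaults reported in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // Upper bound on candidate moves considered per balancer run (1,000,000 in the log).
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    // Steps generated per region, and the wall-clock budget per run in milliseconds.
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    // When true the balancer exhausts maxSteps instead of stopping at maxRunningTime.
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    return conf;
  }
}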
2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:32,967 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:32,977 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ReadOnlyZKClient(139): Connect 0x507ad9dd to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:32,979 DEBUG [RS:0;jenkins-hbase17:41949] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50a65a39, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:32,979 DEBUG [RS:0;jenkins-hbase17:41949] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5499c4f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:32,980 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase17.apache.org,39805,1689938159444; numProcessing=1 2023-07-21 11:16:32,980 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=118, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,39805,1689938159444, splitWal=true, meta=true 2023-07-21 11:16:32,981 DEBUG [RS:2;jenkins-hbase17:43529] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@610c0d27, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:32,981 DEBUG [RS:2;jenkins-hbase17:43529] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@653395de, compressor=null, tcpKeepAlive=true, 
tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:32,981 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689938222981 2023-07-21 11:16:32,988 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 11:16:32,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 11:16:32,990 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase17.apache.org,37137,1689938164928; numProcessing=2 2023-07-21 11:16:32,990 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=119, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,37137,1689938164928, splitWal=true, meta=false 2023-07-21 11:16:32,991 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 11:16:32,991 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 11:16:32,991 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 11:16:32,991 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 11:16:32,992 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
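[editor's note] The cleaner initialization above wires up the log and HFile cleaner chores (DirScanPool sizes 1 and 2, the TimeToLive* cleaners and the replication log cleaner). As a hedged illustration, not this test's actual configuration, the delegate chains and TTL behind those entries are usually configured along these lines:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerTuningSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // Keep archived WALs for 10 minutes before the LogsCleaner chore removes them.
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
    // Cleaner delegate chains; each class is consulted before a file is deleted.
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
            + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner");
    return conf;
  }
}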
2023-07-21 11:16:32,993 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 11:16:32,993 DEBUG [RS:1;jenkins-hbase17:43985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ebd2da2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:32,993 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 11:16:32,993 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=118, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,39805,1689938159444, splitWal=true, meta=true, isMeta: true 2023-07-21 11:16:32,993 DEBUG [RS:1;jenkins-hbase17:43985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a82b2ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:32,994 DEBUG [PEWorker-4] master.DeadServer(103): Processing jenkins-hbase17.apache.org,40783,1689938159262; numProcessing=3 2023-07-21 11:16:32,993 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 11:16:32,994 INFO [PEWorker-4] procedure.ServerCrashProcedure(161): Start pid=121, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,40783,1689938159262, splitWal=true, meta=false 2023-07-21 11:16:32,995 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase17.apache.org,40467,1689938170241; numProcessing=4 2023-07-21 11:16:32,995 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=120, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,40467,1689938170241, splitWal=true, meta=false 2023-07-21 11:16:32,996 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 11:16:32,996 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 11:16:33,000 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444-splitting 2023-07-21 11:16:33,002 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444-splitting dir is empty, no logs to split. 
2023-07-21 11:16:33,002 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,39805,1689938159444 WAL count=0, meta=true 2023-07-21 11:16:33,002 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:41949 2023-07-21 11:16:33,002 INFO [RS:0;jenkins-hbase17:41949] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:33,002 INFO [RS:0;jenkins-hbase17:41949] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:33,002 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:33,002 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938192996,5,FailOnTimeoutGroup] 2023-07-21 11:16:33,003 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,34157,1689938191982 with isa=jenkins-hbase17.apache.org/136.243.18.41:41949, startcode=1689938192168 2023-07-21 11:16:33,003 DEBUG [RS:0;jenkins-hbase17:41949] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:33,005 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:43985 2023-07-21 11:16:33,005 INFO [RS:1;jenkins-hbase17:43985] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:33,005 INFO [RS:1;jenkins-hbase17:43985] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:33,005 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:33,009 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938193003,5,FailOnTimeoutGroup] 2023-07-21 11:16:33,009 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,010 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,34157,1689938191982 with isa=jenkins-hbase17.apache.org/136.243.18.41:43985, startcode=1689938192366 2023-07-21 11:16:33,010 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 11:16:33,010 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,010 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,010 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689938193010, completionTime=-1 2023-07-21 11:16:33,010 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-21 11:16:33,010 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 11:16:33,010 DEBUG [RS:1;jenkins-hbase17:43985] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:33,011 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:49685, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:33,012 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444-splitting dir is empty, no logs to split. 2023-07-21 11:16:33,012 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,39805,1689938159444 WAL count=0, meta=true 2023-07-21 11:16:33,012 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,39805,1689938159444 WAL splitting is done? wals=0, meta=true 2023-07-21 11:16:33,014 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:41197, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:33,017 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34157] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,017 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:33,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=118, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 11:16:33,018 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34157] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,019 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:16:33,019 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
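[editor's note] The WARN above is the master noting that hbase.master.wait.on.regionservers.maxtostart (-1) is below hbase.master.wait.on.regionservers.mintostart (1), so the max is ignored. A small sketch, assuming the standard ServerManager keys, of how the wait thresholds and the 4500 ms timeout shown in the adjacent "Waiting on regionserver count" line can be adjusted for a test cluster (values illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionServerWaitSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // Do not finish master startup until at least this many region servers have reported in.
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
    // Stop waiting as soon as this many have reported (must be >= mintostart to take effect).
    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 3);
    // Overall wait budget and the quiescence interval, both in milliseconds.
    conf.setLong("hbase.master.wait.on.regionservers.timeout", 4_500L);
    conf.setLong("hbase.master.wait.on.regionservers.interval", 1_500L);
    return conf;
  }
}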
2023-07-21 11:16:33,019 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 11:16:33,020 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:33,020 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:33,020 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38715 2023-07-21 11:16:33,020 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:33,021 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:33,020 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=122, ppid=118, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 11:16:33,021 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38715 2023-07-21 11:16:33,021 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:33,022 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:43529 2023-07-21 11:16:33,022 INFO [RS:2;jenkins-hbase17:43529] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:33,023 INFO [RS:2;jenkins-hbase17:43529] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:33,023 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:33,023 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=122, ppid=118, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:33,024 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,43985,1689938192366] 2023-07-21 11:16:33,024 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ZKUtil(162): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,024 WARN [RS:1;jenkins-hbase17:43985] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:33,024 INFO [RS:1;jenkins-hbase17:43985] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:33,024 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,025 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,34157,1689938191982 with isa=jenkins-hbase17.apache.org/136.243.18.41:43529, startcode=1689938192499 2023-07-21 11:16:33,026 DEBUG [RS:2;jenkins-hbase17:43529] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:33,024 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,41949,1689938192168] 2023-07-21 11:16:33,024 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ZKUtil(162): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,027 WARN [RS:0;jenkins-hbase17:41949] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:33,027 INFO [RS:0;jenkins-hbase17:41949] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:33,028 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,029 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54291, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:33,030 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34157] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,030 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
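[editor's note] Each region server above instantiates a WALProvider of type AsyncFSWALProvider via WALFactory. The provider is selected by configuration; a hedged sketch of the relevant keys (values illustrative, matching the provider reported in the log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" maps to AsyncFSWALProvider, the provider named by WALFactory above;
    // "filesystem" (FSHLog) and "multiwal" are the other common choices.
    conf.set("hbase.wal.provider", "asyncfs");
    // The meta WAL may use a different provider than the data WALs.
    conf.set("hbase.wal.meta_provider", "asyncfs");
    return conf;
  }
}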
2023-07-21 11:16:33,030 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:16:33,031 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:33,031 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:33,031 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38715 2023-07-21 11:16:33,033 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:33,033 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ZKUtil(162): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,033 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ZKUtil(162): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,033 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,43529,1689938192499] 2023-07-21 11:16:33,033 WARN [RS:2;jenkins-hbase17:43529] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:33,033 INFO [RS:2;jenkins-hbase17:43529] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:33,033 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ZKUtil(162): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,034 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,034 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ZKUtil(162): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,034 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ZKUtil(162): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,034 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ZKUtil(162): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,034 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ZKUtil(162): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,036 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:33,036 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:33,037 INFO [RS:1;jenkins-hbase17:43985] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:33,037 INFO [RS:0;jenkins-hbase17:41949] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:33,039 INFO [RS:1;jenkins-hbase17:43985] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:33,039 INFO [RS:1;jenkins-hbase17:43985] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:33,039 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,040 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:33,040 INFO [RS:0;jenkins-hbase17:41949] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:33,040 INFO [RS:0;jenkins-hbase17:41949] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:33,041 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,048 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:33,049 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,049 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,049 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,049 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,049 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,049 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,050 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:33,050 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,050 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,050 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,050 DEBUG [RS:1;jenkins-hbase17:43985] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,050 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,051 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,051 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,055 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,055 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,055 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,055 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,055 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,055 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,055 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,056 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:33,056 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,056 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,056 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,056 DEBUG [RS:0;jenkins-hbase17:41949] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,056 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:39805 this server is in the failed servers list 2023-07-21 11:16:33,061 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,061 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,061 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,062 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,061 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=51ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 11:16:33,073 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ZKUtil(162): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,073 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ZKUtil(162): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,074 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ZKUtil(162): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,077 INFO [RS:0;jenkins-hbase17:41949] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:33,077 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41949,1689938192168-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,077 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:33,078 INFO [RS:2;jenkins-hbase17:43529] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:33,079 INFO [RS:2;jenkins-hbase17:43529] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:33,080 INFO [RS:2;jenkins-hbase17:43529] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:33,081 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,083 INFO [RS:1;jenkins-hbase17:43985] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:33,083 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43985,1689938192366-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,084 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:33,085 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
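Here the becomeActiveMaster thread polls ServerManager until at least min=1 region server has reported in, giving up after timeout=4500 ms and logging how long it has waited. A minimal sketch of that style of bounded wait, assuming an AtomicInteger stands in for the registered-server count; only min=1 and the 4500 ms timeout come from the log, the 100 ms polling interval is an assumption, and the real ServerManager additionally tracks a lastChange interval.

import java.util.concurrent.atomic.AtomicInteger;

public class WaitForRegionServersSketch {
    /** Poll until at least minServers have registered or timeoutMs elapses. */
    static boolean waitForServers(AtomicInteger registered, int minServers, long timeoutMs)
            throws InterruptedException {
        long start = System.currentTimeMillis();
        while (registered.get() < minServers) {
            long waited = System.currentTimeMillis() - start;
            if (waited >= timeoutMs) {
                return false;                       // gave up after the timeout
            }
            System.out.println("Waiting on regionserver count=" + registered.get()
                    + "; waited=" + waited + "ms, expecting min=" + minServers);
            Thread.sleep(100);                      // polling interval, assumed
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger registered = new AtomicInteger(0);
        new Thread(() -> {                          // simulate a region server checking in later
            try { Thread.sleep(300); } catch (InterruptedException ignored) { }
            registered.incrementAndGet();
        }).start();
        System.out.println("all servers in: " + waitForServers(registered, 1, 4500));
    }
}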
2023-07-21 11:16:33,085 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,085 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,086 DEBUG [RS:2;jenkins-hbase17:43529] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:33,088 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,088 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,088 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,088 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,094 INFO [RS:0;jenkins-hbase17:41949] regionserver.Replication(203): jenkins-hbase17.apache.org,41949,1689938192168 started 2023-07-21 11:16:33,094 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,41949,1689938192168, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:41949, sessionid=0x101879756880011 2023-07-21 11:16:33,094 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:33,094 DEBUG [RS:0;jenkins-hbase17:41949] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,094 DEBUG [RS:0;jenkins-hbase17:41949] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41949,1689938192168' 2023-07-21 11:16:33,094 DEBUG [RS:0;jenkins-hbase17:41949] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:33,095 DEBUG [RS:0;jenkins-hbase17:41949] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:33,095 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:33,095 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:33,095 DEBUG [RS:0;jenkins-hbase17:41949] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:33,095 DEBUG [RS:0;jenkins-hbase17:41949] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41949,1689938192168' 2023-07-21 11:16:33,096 DEBUG [RS:0;jenkins-hbase17:41949] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:33,096 DEBUG [RS:0;jenkins-hbase17:41949] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:33,096 DEBUG [RS:0;jenkins-hbase17:41949] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:33,096 INFO [RS:0;jenkins-hbase17:41949] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 11:16:33,099 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,099 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ZKUtil(398): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 11:16:33,099 INFO [RS:0;jenkins-hbase17:41949] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 11:16:33,100 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,100 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
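RS:0 has just started its ZKProcedureMemberRpcs members: for each procedure type it first checks the abort znode ('/hbase/flush-table-proc/abort', '/hbase/online-snapshot/abort') and then watches the acquired znode for new procedures. A hedged sketch of those two checks using the plain Apache ZooKeeper client rather than HBase's ZKUtil wrapper, assuming a quorum is reachable at 127.0.0.1:61077 as in the log; the znode paths come from the log, while the session timeout and error handling are assumptions.

import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class ProcedureMemberStartupSketch {
    public static void main(String[] args) throws Exception {
        // Quorum address taken from the log; a live ensemble is assumed for this sketch.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:61077", 30000, (WatchedEvent e) -> { });

        // 1) Check for an aborted-procedure marker, as ZKProcedureMemberRpcs does at startup.
        String abortNode = "/hbase/flush-table-proc/abort";
        if (zk.exists(abortNode, false) != null) {
            System.out.println("found abort marker at " + abortNode);
        }

        // 2) Look for newly acquired procedures under the acquired znode, leaving a watch.
        String acquiredNode = "/hbase/flush-table-proc/acquired";
        try {
            List<String> procedures = zk.getChildren(acquiredNode, true);
            System.out.println("pending procedures: " + procedures);
        } catch (KeeperException.NoNodeException none) {
            System.out.println(acquiredNode + " does not exist yet (not an error)");
        }
        zk.close();
    }
}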
2023-07-21 11:16:33,101 INFO [RS:2;jenkins-hbase17:43529] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:33,101 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43529,1689938192499-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,102 INFO [RS:1;jenkins-hbase17:43985] regionserver.Replication(203): jenkins-hbase17.apache.org,43985,1689938192366 started 2023-07-21 11:16:33,102 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,43985,1689938192366, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:43985, sessionid=0x101879756880012 2023-07-21 11:16:33,105 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:33,105 DEBUG [RS:1;jenkins-hbase17:43985] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,105 DEBUG [RS:1;jenkins-hbase17:43985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43985,1689938192366' 2023-07-21 11:16:33,105 DEBUG [RS:1;jenkins-hbase17:43985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:33,105 DEBUG [RS:1;jenkins-hbase17:43985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43985,1689938192366' 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:33,106 DEBUG [RS:1;jenkins-hbase17:43985] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:33,106 INFO [RS:1;jenkins-hbase17:43985] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 11:16:33,106 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,107 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ZKUtil(398): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 11:16:33,107 INFO [RS:1;jenkins-hbase17:43985] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 11:16:33,107 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,107 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,115 INFO [RS:2;jenkins-hbase17:43529] regionserver.Replication(203): jenkins-hbase17.apache.org,43529,1689938192499 started 2023-07-21 11:16:33,115 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,43529,1689938192499, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:43529, sessionid=0x101879756880013 2023-07-21 11:16:33,115 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:33,115 DEBUG [RS:2;jenkins-hbase17:43529] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,115 DEBUG [RS:2;jenkins-hbase17:43529] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43529,1689938192499' 2023-07-21 11:16:33,115 DEBUG [RS:2;jenkins-hbase17:43529] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43529,1689938192499' 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:33,116 DEBUG [RS:2;jenkins-hbase17:43529] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:33,117 DEBUG [RS:2;jenkins-hbase17:43529] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:33,117 INFO [RS:2;jenkins-hbase17:43529] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 11:16:33,117 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:33,117 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ZKUtil(398): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 11:16:33,117 INFO [RS:2;jenkins-hbase17:43529] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 11:16:33,117 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,117 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:33,173 DEBUG [jenkins-hbase17:34157] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:16:33,173 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:33,174 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:33,174 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:33,174 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:33,174 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:33,177 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,43529,1689938192499, state=OPENING 2023-07-21 11:16:33,178 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:33,178 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:33,178 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=123, ppid=122, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,43529,1689938192499}] 2023-07-21 11:16:33,205 INFO [RS:0;jenkins-hbase17:41949] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41949%2C1689938192168, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:33,209 INFO [RS:1;jenkins-hbase17:43985] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43985%2C1689938192366, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:33,221 INFO [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase17.apache.org%2C43529%2C1689938192499, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:33,232 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:33,232 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:33,233 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:33,242 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:33,242 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:33,242 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:33,250 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:33,250 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:33,250 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:33,250 INFO [RS:0;jenkins-hbase17:41949] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168/jenkins-hbase17.apache.org%2C41949%2C1689938192168.1689938193206 2023-07-21 11:16:33,250 INFO [RS:1;jenkins-hbase17:43985] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366/jenkins-hbase17.apache.org%2C43985%2C1689938192366.1689938193210 2023-07-21 11:16:33,251 DEBUG [RS:0;jenkins-hbase17:41949] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:33,251 DEBUG [RS:1;jenkins-hbase17:43985] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]] 2023-07-21 11:16:33,256 INFO [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499/jenkins-hbase17.apache.org%2C43529%2C1689938192499.1689938193222 2023-07-21 11:16:33,257 DEBUG [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]] 2023-07-21 11:16:33,261 WARN [ReadOnlyZKClient-127.0.0.1:61077@0x740b1723] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 11:16:33,261 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:33,263 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34666, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:33,263 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43529] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:34666 deadline: 1689938253263, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,332 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:33,334 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:33,335 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34676, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:33,339 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:16:33,339 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:33,341 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, 
rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43529%2C1689938192499.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:33,359 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:33,365 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:33,365 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:33,369 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499/jenkins-hbase17.apache.org%2C43529%2C1689938192499.meta.1689938193342.meta 2023-07-21 11:16:33,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:33,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:33,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:33,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:16:33,371 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
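The WAL configuration lines above (blocksize=256 MB, rollsize=128 MB, maxLogs=32) are followed by "New WAL ..." once a fresh file is created; the writer then rolls to a new file when the current one reaches the roll size. A local-filesystem toy version of that size-based rolling, assuming java.nio in place of HDFS and a deliberately tiny 1 KB roll size so the behaviour is visible in a quick run.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RollingWalSketch {
    private final Path logDir;
    private final long rollSizeBytes;
    private Path current;
    private long written;
    private int seq;

    RollingWalSketch(Path logDir, long rollSizeBytes) throws IOException {
        this.logDir = Files.createDirectories(logDir);
        this.rollSizeBytes = rollSizeBytes;
        roll();
    }

    /** Start a new log file (real WALs are named by timestamp, as in the "New WAL ..." lines). */
    private void roll() throws IOException {
        current = logDir.resolve("wal." + System.currentTimeMillis() + "." + (++seq));
        Files.createFile(current);
        written = 0;
        System.out.println("New WAL " + current);
    }

    void append(String record) throws IOException {
        byte[] bytes = (record + "\n").getBytes(StandardCharsets.UTF_8);
        Files.write(current, bytes, StandardOpenOption.APPEND);
        written += bytes.length;
        if (written >= rollSizeBytes) {   // roll once the configured size is reached
            roll();
        }
    }

    public static void main(String[] args) throws IOException {
        // Tiny 1 KB roll size for the demo; the servers above use rollsize=128 MB.
        RollingWalSketch wal = new RollingWalSketch(Path.of("wal-demo"), 1024);
        for (int i = 0; i < 50; i++) {
            wal.append("edit-" + i + " some payload to fill the file");
        }
    }
}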
2023-07-21 11:16:33,371 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:16:33,371 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:33,371 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:16:33,371 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:16:33,373 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:33,374 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:33,374 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:33,375 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:16:33,384 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:33,384 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:33,389 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853 2023-07-21 11:16:33,395 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:33,395 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:33,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:33,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:16:33,396 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:33,396 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:33,397 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:33,403 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:33,403 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:33,427 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:33,427 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:33,427 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:33,427 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:33,428 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:33,428 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:33,429 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:33,435 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:33,436 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:33,441 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da 2023-07-21 11:16:33,449 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:33,449 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:33,449 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:33,450 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:33,451 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:33,454 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
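The CompactionConfiguration lines above report minFilesToCompact:3, maxFilesToCompact:10 and ratio 1.2, and shortly afterwards the exploring policy selects 3 info files of size 23187 bytes. A greatly simplified sketch of the underlying "files in ratio" idea (every file no larger than ratio times the sum of the others), using file sizes that add up to the 23187 bytes seen in the log; the real ExploringCompactionPolicy also scores permutations and off-peak ratios, so treat this only as an illustration.

import java.util.ArrayList;
import java.util.List;

public class RatioCompactionSelectionSketch {
    /** True if every file is no larger than ratio * (sum of the other files): the core "in ratio" test. */
    static boolean filesInRatio(List<Long> sizes, double ratio) {
        long total = sizes.stream().mapToLong(Long::longValue).sum();
        for (long size : sizes) {
            if (size > (total - size) * ratio) {
                return false;
            }
        }
        return true;
    }

    /** Pick the first window of minFiles..maxFiles consecutive candidates that is in ratio. */
    static List<Long> select(List<Long> candidates, int minFiles, int maxFiles, double ratio) {
        for (int start = 0; start < candidates.size(); start++) {
            for (int end = start + minFiles; end <= Math.min(candidates.size(), start + maxFiles); end++) {
                List<Long> window = candidates.subList(start, end);
                if (filesInRatio(window, ratio)) {
                    return new ArrayList<>(window);
                }
            }
        }
        return List.of();
    }

    public static void main(String[] args) {
        // Store file sizes roughly like the three info files above (7.1 K, 8.4 K, 7.1 K), summing to 23187.
        List<Long> sizes = List.of(7_270L, 8_600L, 7_317L);
        // minFilesToCompact=3, maxFilesToCompact=10, ratio=1.2 as reported by CompactionConfiguration.
        System.out.println("selected: " + select(sizes, 3, 10, 1.2));
    }
}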
2023-07-21 11:16:33,455 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:33,456 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=158; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11666563840, jitterRate=0.08653342723846436}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:33,456 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:33,457 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=123, masterSystemTime=1689938193332 2023-07-21 11:16:33,460 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:33,461 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-21 11:16:33,462 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 11:16:33,462 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 11:16:33,470 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:16:33,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:16:33,472 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 23187 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 11:16:33,472 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16944 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 11:16:33,472 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,43529,1689938192499, state=OPEN 2023-07-21 11:16:33,473 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:33,473 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:33,476 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.HStore(1912): 1588230740/table is initiating minor compaction (all files) 2023-07-21 11:16:33,476 INFO [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/table in hbase:meta,,1.1588230740 2023-07-21 
11:16:33,477 INFO [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991] into tmpdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp, totalSize=16.5 K 2023-07-21 11:16:33,480 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-21 11:16:33,480 INFO [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-21 11:16:33,480 INFO [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841] into tmpdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp, totalSize=22.6 K 2023-07-21 11:16:33,482 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] compactions.Compactor(207): Compacting 728cc4f1540e47f282a8d3cbd08b0853, keycount=21, bloomtype=NONE, size=7.1 K, encoding=NONE, compression=NONE, seqNum=15, earliestPutTs=1689938163260 2023-07-21 11:16:33,483 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] compactions.Compactor(207): Compacting 47ab354a4780423db7f93e81451f82da, keycount=4, bloomtype=NONE, size=4.8 K, encoding=NONE, compression=NONE, seqNum=15, earliestPutTs=1689938163329 2023-07-21 11:16:33,483 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] compactions.Compactor(207): Compacting b65be13c0dc640f9a57e3a19398ea4b9, keycount=33, bloomtype=NONE, size=8.4 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1689938168537 2023-07-21 11:16:33,483 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] compactions.Compactor(207): Compacting 53441bb4613b4a9e8e92ee74f2b2633b, keycount=13, bloomtype=NONE, size=6.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=9223372036854775807 2023-07-21 11:16:33,483 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] compactions.Compactor(207): Compacting 06500b67645f42e6aef9708c4d818841, keycount=20, bloomtype=NONE, size=7.1 K, encoding=NONE, compression=NONE, seqNum=154, earliestPutTs=1689938178157 2023-07-21 11:16:33,484 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] compactions.Compactor(207): Compacting 0858982fb8ba4cf8af5d7053ba6f2991, keycount=10, bloomtype=NONE, size=5.7 K, encoding=NONE, compression=NONE, seqNum=154, earliestPutTs=9223372036854775807 2023-07-21 11:16:33,488 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished 
subprocedure pid=123, resume processing ppid=122 2023-07-21 11:16:33,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, ppid=122, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,43529,1689938192499 in 297 msec 2023-07-21 11:16:33,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=118 2023-07-21 11:16:33,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=118, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 475 msec 2023-07-21 11:16:33,523 INFO [RS:2;jenkins-hbase17:43529-longCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#table#compaction#13 average throughput is 0.18 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 11:16:33,530 INFO [RS:2;jenkins-hbase17:43529-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#14 average throughput is 1.59 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 11:16:33,614 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/4749bcea1e764757be2898f2ea93c5d8 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/4749bcea1e764757be2898f2ea93c5d8 2023-07-21 11:16:33,617 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:33,617 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:40467 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:16:33,618 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:40467 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 2023-07-21 11:16:33,633 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/3536ab124fb54a2fb8a540fbd6311b09 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09 2023-07-21 11:16:33,645 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:16:33,645 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:16:33,648 INFO [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 3536ab124fb54a2fb8a540fbd6311b09(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
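The RPC client above adds jenkins-hbase17.apache.org/136.243.18.41:40467 to a failed-servers list after the connection is refused, and the attempts at 11:16:33.740, 11:16:33.945, 11:16:34.256 and 11:16:34.762 are skipped with "this server is in the failed servers list" until a fresh connect is tried at 11:16:35.768. A sketch of such an expiring failed-server cache; the roughly two-second expiry window is inferred from those timestamps and is an assumption, not a value read from the configuration.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FailedServersSketch {
    private final Map<String, Long> expiryByAddress = new ConcurrentHashMap<>();
    private final long expiryMs;

    FailedServersSketch(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    /** Remember that a connection attempt to this address just failed. */
    void addToFailedServers(String address) {
        expiryByAddress.put(address, System.currentTimeMillis() + expiryMs);
        System.out.println("Added failed server with address " + address + " to list");
    }

    /** True while the failure is still fresh, so callers skip reconnecting. */
    boolean isFailedServer(String address) {
        Long expiry = expiryByAddress.get(address);
        if (expiry == null) {
            return false;
        }
        if (System.currentTimeMillis() >= expiry) {
            expiryByAddress.remove(address, expiry);   // entry expired, allow a new attempt
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        FailedServersSketch failed = new FailedServersSketch(2000);   // ~2 s expiry window, assumed
        String addr = "jenkins-hbase17.apache.org/136.243.18.41:40467";
        failed.addToFailedServers(addr);
        System.out.println("skip reconnect? " + failed.isFailedServer(addr));   // true right away
        Thread.sleep(2100);
        System.out.println("skip reconnect? " + failed.isFailedServer(addr));   // false after expiry
    }
}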
2023-07-21 11:16:33,648 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-21 11:16:33,648 INFO [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1689938193459; duration=0sec 2023-07-21 11:16:33,649 DEBUG [RS:2;jenkins-hbase17:43529-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:33,657 INFO [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/table of 1588230740 into 4749bcea1e764757be2898f2ea93c5d8(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 11:16:33,657 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-21 11:16:33,657 INFO [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/table, priority=13, startTime=1689938193461; duration=0sec 2023-07-21 11:16:33,657 DEBUG [RS:2;jenkins-hbase17:43529-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:33,740 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:40467 this server is in the failed servers list 2023-07-21 11:16:33,945 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:40467 this server is in the failed servers list 2023-07-21 11:16:34,256 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:40467 this server is in the failed servers list 2023-07-21 11:16:34,580 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1570ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1519ms 2023-07-21 11:16:34,762 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:40467 this server is in the failed servers list 2023-07-21 11:16:35,768 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:40467 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:16:35,770 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:40467 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 2023-07-21 11:16:36,083 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3073ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3022ms 2023-07-21 11:16:37,327 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 11:16:37,328 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-21 11:16:37,545 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4535ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-21 11:16:37,545 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 11:16:37,548 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=2bd94f497343684e2f5a451c6e430d4d, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,40783,1689938159262, regionLocation=jenkins-hbase17.apache.org,40783,1689938159262, openSeqNum=2 2023-07-21 11:16:37,548 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=2782e41606006289532e239f665ea4eb, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,40467,1689938170241, regionLocation=jenkins-hbase17.apache.org,40467,1689938170241, openSeqNum=41 2023-07-21 11:16:37,548 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 11:16:37,548 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689938257548 2023-07-21 11:16:37,548 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689938317548 2023-07-21 11:16:37,549 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-21 11:16:37,565 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,39805,1689938159444 had 1 regions 2023-07-21 11:16:37,565 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34157,1689938191982-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:37,566 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34157,1689938191982-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:37,567 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,40783,1689938159262 had 1 regions 2023-07-21 11:16:37,567 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,37137,1689938164928 had 0 regions 2023-07-21 11:16:37,567 INFO [PEWorker-4] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,40467,1689938170241 had 1 regions 2023-07-21 11:16:37,569 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34157,1689938191982-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:37,569 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:34157, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:37,569 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:37,570 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. is NOT online; state={2bd94f497343684e2f5a451c6e430d4d state=OPEN, ts=1689938197548, server=jenkins-hbase17.apache.org,40783,1689938159262}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 
2023-07-21 11:16:37,575 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=121, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,40783,1689938159262, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:37,575 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=118, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,39805,1689938159444, splitWal=true, meta=true, isMeta: false 2023-07-21 11:16:37,575 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=119, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,37137,1689938164928, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:37,575 INFO [PEWorker-4] procedure.ServerCrashProcedure(300): Splitting WALs pid=120, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,40467,1689938170241, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:37,580 WARN [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase17.apache.org,40783,1689938159262/hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., unknown_server=jenkins-hbase17.apache.org,40467,1689938170241/hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:37,581 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262-splitting 2023-07-21 11:16:37,583 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,583 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,40783,1689938159262 WAL count=0, meta=false 2023-07-21 11:16:37,583 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,583 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,39805,1689938159444 WAL count=0, meta=false 2023-07-21 11:16:37,584 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928-splitting 2023-07-21 11:16:37,585 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928-splitting dir is empty, no logs to split. 
2023-07-21 11:16:37,585 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,37137,1689938164928 WAL count=0, meta=false 2023-07-21 11:16:37,586 DEBUG [PEWorker-4] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241-splitting 2023-07-21 11:16:37,587 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,587 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase17.apache.org,40467,1689938170241 WAL count=0, meta=false 2023-07-21 11:16:37,589 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,590 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,37137,1689938164928 WAL count=0, meta=false 2023-07-21 11:16:37,590 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,37137,1689938164928 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:37,591 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,591 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,40783,1689938159262 WAL count=0, meta=false 2023-07-21 11:16:37,591 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,40783,1689938159262 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:37,593 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,37137,1689938164928 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,37137,1689938164928-splitting does not exist. 2023-07-21 11:16:37,595 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,39805,1689938159444-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,595 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,39805,1689938159444 WAL count=0, meta=false 2023-07-21 11:16:37,595 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,39805,1689938159444 WAL splitting is done? 
wals=0, meta=false 2023-07-21 11:16:37,595 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,37137,1689938164928 after splitting done 2023-07-21 11:16:37,595 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase17.apache.org,37137,1689938164928 from processing; numProcessing=3 2023-07-21 11:16:37,595 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,40783,1689938159262 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40783,1689938159262-splitting does not exist. 2023-07-21 11:16:37,595 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241-splitting dir is empty, no logs to split. 2023-07-21 11:16:37,595 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase17.apache.org,40467,1689938170241 WAL count=0, meta=false 2023-07-21 11:16:37,595 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,40467,1689938170241 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:37,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=121, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN}] 2023-07-21 11:16:37,598 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,37137,1689938164928, splitWal=true, meta=false in 4.6730 sec 2023-07-21 11:16:37,600 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,39805,1689938159444 after splitting done 2023-07-21 11:16:37,600 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=121, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN 2023-07-21 11:16:37,600 INFO [PEWorker-4] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,40467,1689938170241 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,40467,1689938170241-splitting does not exist. 
2023-07-21 11:16:37,600 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,39805,1689938159444 from processing; numProcessing=2 2023-07-21 11:16:37,605 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=121, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:37,605 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN}] 2023-07-21 11:16:37,607 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=125, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN 2023-07-21 11:16:37,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,39805,1689938159444, splitWal=true, meta=true in 4.6840 sec 2023-07-21 11:16:37,608 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=125, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:37,608 DEBUG [jenkins-hbase17:34157] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:16:37,609 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:37,609 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:37,609 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:37,609 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:37,609 DEBUG [jenkins-hbase17:34157] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-21 11:16:37,611 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:37,612 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938197611"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938197611"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938197611"}]},"ts":"1689938197611"} 2023-07-21 11:16:37,613 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=125 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:37,614 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938197613"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938197613"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938197613"}]},"ts":"1689938197613"} 2023-07-21 11:16:37,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=126, ppid=124, state=RUNNABLE; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,43985,1689938192366}] 2023-07-21 11:16:37,615 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=125, state=RUNNABLE; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,41949,1689938192168}] 2023-07-21 11:16:37,769 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:37,769 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:37,769 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:37,770 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:37,773 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:37,778 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60860, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:37,798 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:37,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2bd94f497343684e2f5a451c6e430d4d, NAME => 'hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:37,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:37,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,801 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:40467 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:16:37,801 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:40467 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 2023-07-21 11:16:37,808 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4218 ms ago, cancelled=false, msg=Call to address=jenkins-hbase17.apache.org/136.243.18.41:40467 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., hostname=jenkins-hbase17.apache.org,40467,1689938170241, seqNum=41, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase17.apache.org/136.243.18.41:40467 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:40467 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:16:37,810 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,810 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:37,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:37,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:37,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. service=MultiRowMutationService 2023-07-21 11:16:37,811 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 11:16:37,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:37,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,811 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:37,812 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:37,812 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2bd94f497343684e2f5a451c6e430d4d columnFamilyName info 2023-07-21 11:16:37,813 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,814 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:37,814 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:37,815 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2782e41606006289532e239f665ea4eb columnFamilyName m 2023-07-21 11:16:37,830 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:37,831 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info/db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:37,831 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(310): Store=2bd94f497343684e2f5a451c6e430d4d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:37,831 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:37,831 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:37,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,838 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7 2023-07-21 11:16:37,840 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:37,842 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2bd94f497343684e2f5a451c6e430d4d; next sequenceid=15; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11312055520, jitterRate=0.05351726710796356}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:37,842 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:37,843 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., pid=126, 
masterSystemTime=1689938197769 2023-07-21 11:16:37,850 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPEN, openSeqNum=15, regionLocation=jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:37,850 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:37,850 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938197850"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938197850"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938197850"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938197850"}]},"ts":"1689938197850"} 2023-07-21 11:16:37,851 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:37,852 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(310): Store=2782e41606006289532e239f665ea4eb/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:37,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:37,854 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:37,864 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=126, resume processing ppid=124 2023-07-21 11:16:37,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=124, state=SUCCESS; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,43985,1689938192366 in 238 msec 2023-07-21 11:16:37,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=121 2023-07-21 11:16:37,868 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,40783,1689938159262 after splitting done 2023-07-21 11:16:37,869 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=121, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN in 269 msec 2023-07-21 11:16:37,869 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,40783,1689938159262 from processing; numProcessing=1 2023-07-21 11:16:37,870 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,873 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,40783,1689938159262, splitWal=true, meta=false in 4.9430 sec 2023-07-21 11:16:37,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:37,875 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2782e41606006289532e239f665ea4eb; next sequenceid=83; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6dded926, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:37,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:37,876 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., pid=127, masterSystemTime=1689938197769 2023-07-21 11:16:37,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:37,877 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 11:16:37,878 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16271 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 11:16:37,878 DEBUG 
[RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.HStore(1912): 2782e41606006289532e239f665ea4eb/m is initiating minor compaction (all files) 2023-07-21 11:16:37,878 INFO [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 2782e41606006289532e239f665ea4eb/m in hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:37,878 INFO [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c] into tmpdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp, totalSize=15.9 K 2023-07-21 11:16:37,879 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] compactions.Compactor(207): Compacting 14fcb2495f27487ba67ba2d3cfa299f7, keycount=3, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1689938166041 2023-07-21 11:16:37,879 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] compactions.Compactor(207): Compacting 0fb9bf38ccef403bbe61f4b8544ca472, keycount=10, bloomtype=ROW, size=5.4 K, encoding=NONE, compression=NONE, seqNum=37, earliestPutTs=1689938176494 2023-07-21 11:16:37,880 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] compactions.Compactor(207): Compacting 3e23205333ea45fca4f644908fd8226c, keycount=14, bloomtype=ROW, size=5.5 K, encoding=NONE, compression=NONE, seqNum=79, earliestPutTs=1689938188801 2023-07-21 11:16:37,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:37,888 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:37,889 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=125 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPEN, openSeqNum=83, regionLocation=jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:37,889 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938197889"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938197889"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938197889"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938197889"}]},"ts":"1689938197889"} 2023-07-21 11:16:37,898 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=125 2023-07-21 11:16:37,898 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=125, state=SUCCESS; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,41949,1689938192168 in 280 msec 2023-07-21 11:16:37,902 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=120 2023-07-21 11:16:37,902 INFO [PEWorker-4] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,40467,1689938170241 after splitting done 2023-07-21 11:16:37,902 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN in 293 msec 2023-07-21 11:16:37,902 DEBUG [PEWorker-4] master.DeadServer(114): Removed jenkins-hbase17.apache.org,40467,1689938170241 from processing; numProcessing=0 2023-07-21 11:16:37,904 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,40467,1689938170241, splitWal=true, meta=false in 4.9780 sec 2023-07-21 11:16:37,916 INFO [RS:0;jenkins-hbase17:41949-shortCompactions-0] throttle.PressureAwareThroughputController(145): 2782e41606006289532e239f665ea4eb#m#compaction#15 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 11:16:37,935 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/aeb270fc9f7943c29e25e4ef55952a60 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60 2023-07-21 11:16:37,944 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:16:37,945 INFO [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 2782e41606006289532e239f665ea4eb/m of 2782e41606006289532e239f665ea4eb into aeb270fc9f7943c29e25e4ef55952a60(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-21 11:16:37,945 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:37,945 INFO [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., storeName=2782e41606006289532e239f665ea4eb/m, priority=13, startTime=1689938197876; duration=0sec 2023-07-21 11:16:37,945 DEBUG [RS:0;jenkins-hbase17:41949-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:38,571 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-21 11:16:38,587 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:38,589 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:49674, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:38,603 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:16:38,604 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 11:16:38,604 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.932sec 2023-07-21 11:16:38,608 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-21 11:16:38,608 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:38,610 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=128, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 11:16:38,610 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 11:16:38,613 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-21 11:16:38,614 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=128, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 11:16:38,615 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=128, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 11:16:38,617 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,617 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 empty. 2023-07-21 11:16:38,618 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,618 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 11:16:38,626 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 11:16:38,626 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-21 11:16:38,629 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:38,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:38,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 11:16:38,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 11:16:38,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34157,1689938191982-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 11:16:38,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34157,1689938191982-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-21 11:16:38,631 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 11:16:38,637 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 11:16:38,638 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 77ef890485c37098a66e3a9a030a0490, NAME => 'hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.tmp 2023-07-21 11:16:38,653 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:38,653 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x70c9e445 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:38,653 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 77ef890485c37098a66e3a9a030a0490, disabling compactions & flushes 2023-07-21 11:16:38,653 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:38,653 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:38,653 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. after waiting 0 ms 2023-07-21 11:16:38,653 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:38,654 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
2023-07-21 11:16:38,654 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:38,660 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f95cb49, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:38,661 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=128, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 11:16:38,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938198661"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938198661"}]},"ts":"1689938198661"} 2023-07-21 11:16:38,664 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 11:16:38,666 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=128, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 11:16:38,666 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938198666"}]},"ts":"1689938198666"} 2023-07-21 11:16:38,667 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 11:16:38,670 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:38,670 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:38,670 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:38,670 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:38,670 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:38,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN}] 2023-07-21 11:16:38,673 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN 2023-07-21 11:16:38,673 DEBUG [hconnection-0x7f94b04e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:38,674 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41949,1689938192168; forceNewPlan=false, retain=false 2023-07-21 11:16:38,675 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): 
Connection from 136.243.18.41:50788, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:38,684 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-21 11:16:38,684 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x70c9e445 to 127.0.0.1:61077 2023-07-21 11:16:38,684 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:38,686 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase17.apache.org:34157 after: jenkins-hbase17.apache.org:34157 2023-07-21 11:16:38,687 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x69f18b5f to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:38,693 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@333d50ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:38,693 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:38,824 INFO [jenkins-hbase17:34157] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 11:16:38,826 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:38,826 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938198826"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938198826"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938198826"}]},"ts":"1689938198826"} 2023-07-21 11:16:38,829 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; OpenRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,41949,1689938192168}] 2023-07-21 11:16:38,878 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:38,990 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
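The ReadOnlyZKClient and AbstractRpcClient entries above correspond to the test handing out a fresh client connection against the mini cluster's ZooKeeper ensemble at 127.0.0.1:61077 after the restart. As a rough sketch only (the class name and the tableExists check are illustrative and not part of TestRSGroupsBasics), such a connection is normally obtained through the public HBase 2.x client API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
        public static void main(String[] args) throws Exception {
            // Point the client at the mini cluster's ZooKeeper ensemble seen in the log.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.set("hbase.zookeeper.property.clientPort", "61077");

            // ConnectionFactory opens the ZooKeeper and RPC clients that surface as the
            // ReadOnlyZKClient / AbstractRpcClient DEBUG entries above.
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Admin admin = connection.getAdmin()) {
                System.out.println("hbase:quota exists: "
                    + admin.tableExists(TableName.valueOf("hbase:quota")));
            }
        }
    }

In the test itself the equivalent connection is typically handed out by HBaseTestingUtility rather than built by hand, which is why no quorum address appears in the test code.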
2023-07-21 11:16:38,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77ef890485c37098a66e3a9a030a0490, NAME => 'hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:38,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:38,991 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,991 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,992 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,995 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/q 2023-07-21 11:16:38,995 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/q 2023-07-21 11:16:38,995 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77ef890485c37098a66e3a9a030a0490 columnFamilyName q 2023-07-21 11:16:38,996 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(310): Store=77ef890485c37098a66e3a9a030a0490/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:38,996 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:38,998 DEBUG 
[StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/u 2023-07-21 11:16:38,998 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/u 2023-07-21 11:16:38,998 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77ef890485c37098a66e3a9a030a0490 columnFamilyName u 2023-07-21 11:16:38,999 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(310): Store=77ef890485c37098a66e3a9a030a0490/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:39,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:39,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:39,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
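The CompactionConfiguration(173) entries for families q and u report the stock compaction settings (minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, major period 604800000 ms with 0.5 jitter). Those numbers are driven by standard configuration keys; purely as an illustration of which keys map to which values in that line (not something this test sets explicitly), they could be expressed as:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
        // Builds a Configuration carrying the same values that the
        // CompactionConfiguration line above reports as being in effect.
        static Configuration compactionDefaults() {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("hbase.hstore.compaction.min", 3);            // minFilesToCompact
            conf.setInt("hbase.hstore.compaction.max", 10);           // maxFilesToCompact
            conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);     // ratio
            conf.setLong("hbase.hregion.majorcompaction", 604800000L);    // major period, 7 days
            conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);  // major jitter
            return conf;
        }
    }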
2023-07-21 11:16:39,004 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:39,006 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 11:16:39,007 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 77ef890485c37098a66e3a9a030a0490; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10425707040, jitterRate=-0.029030367732048035}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 11:16:39,007 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:39,008 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490., pid=130, masterSystemTime=1689938198984 2023-07-21 11:16:39,010 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:39,010 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:39,010 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:39,011 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938199010"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938199010"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938199010"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938199010"}]},"ts":"1689938199010"} 2023-07-21 11:16:39,014 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-21 11:16:39,015 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; OpenRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,41949,1689938192168 in 183 msec 2023-07-21 11:16:39,016 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=128 2023-07-21 11:16:39,016 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=128, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN in 345 msec 2023-07-21 11:16:39,017 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=128, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 11:16:39,017 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689938199017"}]},"ts":"1689938199017"} 2023-07-21 11:16:39,018 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 11:16:39,020 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=128, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 11:16:39,022 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, state=SUCCESS; CreateTableProcedure table=hbase:quota in 411 msec 2023-07-21 11:16:39,037 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 11:16:39,037 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-21 11:16:41,858 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:41,870 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:41,871 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 11:16:41,871 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-21 11:16:41,897 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:41,897 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:41,897 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 11:16:41,898 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:41,899 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-21 11:16:41,899 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,34157,1689938191982] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 11:16:41,927 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42720, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 11:16:41,928 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:16:41,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34157] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 11:16:41,931 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x1121a5df to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:41,954 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@344c62cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:41,954 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:41,955 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:41,958 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:41,960 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:41,961 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:50792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), 
service=ClientService 2023-07-21 11:16:41,966 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10187975688001b connected 2023-07-21 11:16:41,981 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-21 11:16:41,982 INFO [Listener at localhost.localdomain/33557] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x69f18b5f to 127.0.0.1:61077 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] util.JVMClusterUtil(257): Found active master hash=23482100, stopped=false 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:16:41,982 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 11:16:41,982 INFO [Listener at localhost.localdomain/33557] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:41,983 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:41,983 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:41,984 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:41,984 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:41,984 INFO [Listener at localhost.localdomain/33557] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 11:16:41,985 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:41,986 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x740b1723 to 127.0.0.1:61077 2023-07-21 11:16:41,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-07-21 11:16:41,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:41,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:41,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:41,986 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:41,987 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,41949,1689938192168' ***** 2023-07-21 11:16:41,987 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:41,987 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:41,988 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43985,1689938192366' ***** 2023-07-21 11:16:41,996 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:41,997 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43529,1689938192499' ***** 2023-07-21 11:16:41,997 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:41,997 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:42,001 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:42,004 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:42,013 INFO [RS:2;jenkins-hbase17:43529] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@55edb755{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:42,013 INFO [RS:1;jenkins-hbase17:43985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2c33fc1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:42,013 INFO [RS:0;jenkins-hbase17:41949] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@662417e8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:42,013 INFO [RS:2;jenkins-hbase17:43529] server.AbstractConnector(383): Stopped ServerConnector@537b3ee7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:42,013 INFO [RS:1;jenkins-hbase17:43985] server.AbstractConnector(383): Stopped ServerConnector@630b0f6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:42,014 INFO [RS:2;jenkins-hbase17:43529] 
session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:42,014 INFO [RS:0;jenkins-hbase17:41949] server.AbstractConnector(383): Stopped ServerConnector@5819e77a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:42,014 INFO [RS:0;jenkins-hbase17:41949] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:42,014 INFO [RS:0;jenkins-hbase17:41949] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69f1364{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:42,014 INFO [RS:0;jenkins-hbase17:41949] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55137119{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:42,014 INFO [RS:1;jenkins-hbase17:43985] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:42,014 INFO [RS:2;jenkins-hbase17:43529] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@521a39d5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:42,015 INFO [RS:1;jenkins-hbase17:43985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2151df19{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:42,015 INFO [RS:2;jenkins-hbase17:43529] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@360b4e2b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:42,015 INFO [RS:1;jenkins-hbase17:43985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@8018fa5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:42,015 INFO [RS:0;jenkins-hbase17:41949] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:42,015 INFO [RS:0;jenkins-hbase17:41949] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:42,016 INFO [RS:0;jenkins-hbase17:41949] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 11:16:42,016 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(3305): Received CLOSE for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:42,016 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:42,017 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(3305): Received CLOSE for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:42,017 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:42,017 DEBUG [RS:0;jenkins-hbase17:41949] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e61af85 to 127.0.0.1:61077 2023-07-21 11:16:42,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:42,017 DEBUG [RS:0;jenkins-hbase17:41949] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:42,017 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 11:16:42,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:42,017 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1478): Online Regions={2782e41606006289532e239f665ea4eb=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., 77ef890485c37098a66e3a9a030a0490=hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.} 2023-07-21 11:16:42,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 0 ms 2023-07-21 11:16:42,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:42,021 DEBUG [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1504): Waiting on 2782e41606006289532e239f665ea4eb, 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:42,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2782e41606006289532e239f665ea4eb 1/1 column families, dataSize=245 B heapSize=656 B 2023-07-21 11:16:42,021 INFO [RS:1;jenkins-hbase17:43985] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:42,021 INFO [RS:1;jenkins-hbase17:43985] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:42,021 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:42,021 INFO [RS:1;jenkins-hbase17:43985] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 11:16:42,022 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(3305): Received CLOSE for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:42,023 INFO [RS:2;jenkins-hbase17:43529] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:42,023 INFO [RS:2;jenkins-hbase17:43529] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:42,023 INFO [RS:2;jenkins-hbase17:43529] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:42,023 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:42,023 DEBUG [RS:2;jenkins-hbase17:43529] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fcbfeae to 127.0.0.1:61077 2023-07-21 11:16:42,023 DEBUG [RS:2;jenkins-hbase17:43529] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,023 INFO [RS:2;jenkins-hbase17:43529] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:42,023 INFO [RS:2;jenkins-hbase17:43529] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:42,023 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:42,023 DEBUG [RS:1;jenkins-hbase17:43985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x507ad9dd to 127.0.0.1:61077 2023-07-21 11:16:42,023 DEBUG [RS:1;jenkins-hbase17:43985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,023 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:42,023 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:16:42,023 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1478): Online Regions={2bd94f497343684e2f5a451c6e430d4d=hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.} 2023-07-21 11:16:42,023 DEBUG [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1504): Waiting on 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:42,024 INFO [RS:2;jenkins-hbase17:43529] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:42,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2bd94f497343684e2f5a451c6e430d4d, disabling compactions & flushes 2023-07-21 11:16:42,024 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 11:16:42,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. after waiting 0 ms 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:42,025 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 11:16:42,025 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:42,025 DEBUG [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 11:16:42,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:42,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:42,027 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.06 KB heapSize=5.87 KB 2023-07-21 11:16:42,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=14 2023-07-21 11:16:42,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:42,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:42,050 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:42,051 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:16:42,051 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:16:42,052 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=171 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/5c902cb369004c06a80ca0785e879dc9 2023-07-21 11:16:42,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=245 B at sequenceid=87 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/caeb8cb159f544518af404b183b96da3 2023-07-21 11:16:42,069 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:42,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/caeb8cb159f544518af404b183b96da3 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3 2023-07-21 11:16:42,081 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:42,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=171 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/176c58e30866445dac88d784f537577a 2023-07-21 11:16:42,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3, entries=2, sequenceid=87, filesize=5.0 K 2023-07-21 11:16:42,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~245 B/245, heapSize ~640 B/640, currentSize=0 B/0 for 2782e41606006289532e239f665ea4eb in 73ms, sequenceid=87, compaction requested=false 2023-07-21 11:16:42,101 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c] to archive 2023-07-21 11:16:42,102 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] 
backup.HFileArchiver(360): Archiving compacted files. 2023-07-21 11:16:42,103 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:16:42,103 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:16:42,108 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/14fcb2495f27487ba67ba2d3cfa299f7 2023-07-21 11:16:42,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/5c902cb369004c06a80ca0785e879dc9 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9 2023-07-21 11:16:42,112 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/0fb9bf38ccef403bbe61f4b8544ca472 2023-07-21 11:16:42,114 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/3e23205333ea45fca4f644908fd8226c 2023-07-21 11:16:42,123 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9, entries=26, sequenceid=171, filesize=7.7 K 2023-07-21 11:16:42,124 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/table/176c58e30866445dac88d784f537577a as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/176c58e30866445dac88d784f537577a 2023-07-21 11:16:42,132 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/176c58e30866445dac88d784f537577a, entries=2, sequenceid=171, filesize=4.7 K 2023-07-21 11:16:42,134 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.06 KB/3132, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 108ms, sequenceid=171, compaction requested=false 2023-07-21 11:16:42,146 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 11:16:42,146 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 11:16:42,157 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841] to archive 2023-07-21 11:16:42,158 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-21 11:16:42,160 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/info/728cc4f1540e47f282a8d3cbd08b0853 2023-07-21 11:16:42,161 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/info/b65be13c0dc640f9a57e3a19398ea4b9 2023-07-21 11:16:42,163 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/info/06500b67645f42e6aef9708c4d818841 2023-07-21 11:16:42,189 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991] to archive 2023-07-21 11:16:42,190 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-21 11:16:42,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/90.seqid, newMaxSeqId=90, maxSeqId=82 2023-07-21 11:16:42,194 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/table/47ab354a4780423db7f93e81451f82da 2023-07-21 11:16:42,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:42,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:42,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:42,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:42,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 77ef890485c37098a66e3a9a030a0490, disabling compactions & flushes 2023-07-21 11:16:42,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:42,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:42,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. after waiting 0 ms 2023-07-21 11:16:42,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
2023-07-21 11:16:42,198 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/table/53441bb4613b4a9e8e92ee74f2b2633b 2023-07-21 11:16:42,199 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/table/0858982fb8ba4cf8af5d7053ba6f2991 2023-07-21 11:16:42,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 11:16:42,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:42,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:42,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:42,213 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/174.seqid, newMaxSeqId=174, maxSeqId=157 2023-07-21 11:16:42,213 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:42,214 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:42,214 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:42,215 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:42,221 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41949,1689938192168; all regions closed. 2023-07-21 11:16:42,221 DEBUG [RS:0;jenkins-hbase17:41949] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 11:16:42,224 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43985,1689938192366; all regions closed. 2023-07-21 11:16:42,224 DEBUG [RS:1;jenkins-hbase17:43985] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 11:16:42,225 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43529,1689938192499; all regions closed. 
2023-07-21 11:16:42,225 DEBUG [RS:2;jenkins-hbase17:43529] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 11:16:42,233 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499/jenkins-hbase17.apache.org%2C43529%2C1689938192499.meta.1689938193342.meta not finished, retry = 0 2023-07-21 11:16:42,233 DEBUG [RS:0;jenkins-hbase17:41949] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:42,233 INFO [RS:0;jenkins-hbase17:41949] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C41949%2C1689938192168:(num 1689938193206) 2023-07-21 11:16:42,233 DEBUG [RS:0;jenkins-hbase17:41949] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,233 INFO [RS:0;jenkins-hbase17:41949] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:42,234 INFO [RS:0;jenkins-hbase17:41949] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:42,234 INFO [RS:0;jenkins-hbase17:41949] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:42,234 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:42,234 INFO [RS:0;jenkins-hbase17:41949] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:42,234 INFO [RS:0;jenkins-hbase17:41949] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
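Everything from the TestRSGroupsBasics(309) "Shutting down cluster" entry onward is the ordinary teardown path: JVMClusterUtil requests cluster shutdown from the master, each region server flushes and closes its online regions, rolls its WALs into oldWALs, and its ephemeral znode under /hbase/rs is removed. On the test side this whole sequence is usually triggered by a single HBaseTestingUtility call; a minimal sketch (the stopCluster method name is illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterShutdownSketch {
        // Stops the master and all region servers, then tears down the mini DFS and
        // ZooKeeper ensemble started for the test.
        static void stopCluster(HBaseTestingUtility util) throws Exception {
            util.shutdownMiniCluster();
        }
    }

The earlier "HBase has been restarted" entry at HBaseTestingUtility(1262) is the lighter-weight variant, where only the HBase cluster is bounced (typically via restartHBaseCluster) while the mini DFS and ZooKeeper ensemble keep running.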
2023-07-21 11:16:42,235 INFO [RS:0;jenkins-hbase17:41949] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41949 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41949,1689938192168 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:42,238 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:42,238 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,41949,1689938192168] 2023-07-21 11:16:42,239 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41949,1689938192168; numProcessing=1 2023-07-21 11:16:42,239 DEBUG [RS:1;jenkins-hbase17:43985] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:42,239 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41949,1689938192168 already deleted, retry=false 2023-07-21 11:16:42,239 INFO [RS:1;jenkins-hbase17:43985] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C43985%2C1689938192366:(num 1689938193210) 2023-07-21 11:16:42,239 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,41949,1689938192168 expired; onlineServers=2 2023-07-21 11:16:42,239 DEBUG [RS:1;jenkins-hbase17:43985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,239 INFO [RS:1;jenkins-hbase17:43985] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:42,240 INFO [RS:1;jenkins-hbase17:43985] hbase.ChoreService(369): Chore service for: 
regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:42,240 INFO [RS:1;jenkins-hbase17:43985] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:42,240 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:42,240 INFO [RS:1;jenkins-hbase17:43985] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:42,240 INFO [RS:1;jenkins-hbase17:43985] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:42,240 INFO [RS:1;jenkins-hbase17:43985] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43985 2023-07-21 11:16:42,243 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:42,243 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:42,243 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43985,1689938192366 2023-07-21 11:16:42,244 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,43985,1689938192366] 2023-07-21 11:16:42,244 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43985,1689938192366; numProcessing=2 2023-07-21 11:16:42,245 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43985,1689938192366 already deleted, retry=false 2023-07-21 11:16:42,245 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,43985,1689938192366 expired; onlineServers=1 2023-07-21 11:16:42,335 DEBUG [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:42,335 INFO [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C43529%2C1689938192499.meta:.meta(num 1689938193342) 2023-07-21 11:16:42,346 DEBUG [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:42,346 INFO [RS:2;jenkins-hbase17:43529] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C43529%2C1689938192499:(num 1689938193222) 2023-07-21 11:16:42,347 DEBUG [RS:2;jenkins-hbase17:43529] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,347 INFO [RS:2;jenkins-hbase17:43529] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:42,347 INFO [RS:2;jenkins-hbase17:43529] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore 
name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:42,347 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:42,348 INFO [RS:2;jenkins-hbase17:43529] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43529 2023-07-21 11:16:42,351 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:42,351 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43529,1689938192499 2023-07-21 11:16:42,352 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,43529,1689938192499] 2023-07-21 11:16:42,353 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43529,1689938192499; numProcessing=3 2023-07-21 11:16:42,354 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43529,1689938192499 already deleted, retry=false 2023-07-21 11:16:42,354 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,43529,1689938192499 expired; onlineServers=0 2023-07-21 11:16:42,354 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34157,1689938191982' ***** 2023-07-21 11:16:42,354 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 11:16:42,355 DEBUG [M:0;jenkins-hbase17:34157] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@559da00, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:42,355 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:42,355 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:42,357 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:42,357 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:42,358 INFO [M:0;jenkins-hbase17:34157] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@67d54367{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 
11:16:42,358 INFO [M:0;jenkins-hbase17:34157] server.AbstractConnector(383): Stopped ServerConnector@4a4234b6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:42,358 INFO [M:0;jenkins-hbase17:34157] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:42,358 INFO [M:0;jenkins-hbase17:34157] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a33e4ba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:42,358 INFO [M:0;jenkins-hbase17:34157] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@198e553f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:42,359 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34157,1689938191982 2023-07-21 11:16:42,359 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34157,1689938191982; all regions closed. 2023-07-21 11:16:42,359 DEBUG [M:0;jenkins-hbase17:34157] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:42,359 INFO [M:0;jenkins-hbase17:34157] master.HMaster(1491): Stopping master jetty server 2023-07-21 11:16:42,362 INFO [M:0;jenkins-hbase17:34157] server.AbstractConnector(383): Stopped ServerConnector@46a22f4f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:42,364 DEBUG [M:0;jenkins-hbase17:34157] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 11:16:42,364 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 11:16:42,364 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938193003] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938193003,5,FailOnTimeoutGroup] 2023-07-21 11:16:42,364 DEBUG [M:0;jenkins-hbase17:34157] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 11:16:42,365 INFO [M:0;jenkins-hbase17:34157] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 11:16:42,365 INFO [M:0;jenkins-hbase17:34157] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 11:16:42,366 INFO [M:0;jenkins-hbase17:34157] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:42,366 DEBUG [M:0;jenkins-hbase17:34157] master.HMaster(1512): Stopping service threads 2023-07-21 11:16:42,366 INFO [M:0;jenkins-hbase17:34157] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 11:16:42,366 ERROR [M:0;jenkins-hbase17:34157] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 11:16:42,366 INFO [M:0;jenkins-hbase17:34157] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 11:16:42,367 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 11:16:42,364 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938192996] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938192996,5,FailOnTimeoutGroup] 2023-07-21 11:16:42,368 DEBUG [M:0;jenkins-hbase17:34157] zookeeper.ZKUtil(398): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 11:16:42,368 WARN [M:0;jenkins-hbase17:34157] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 11:16:42,368 INFO [M:0;jenkins-hbase17:34157] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 11:16:42,369 INFO [M:0;jenkins-hbase17:34157] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 11:16:42,370 DEBUG [M:0;jenkins-hbase17:34157] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:16:42,370 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:42,370 DEBUG [M:0;jenkins-hbase17:34157] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:42,370 DEBUG [M:0;jenkins-hbase17:34157] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:16:42,370 DEBUG [M:0;jenkins-hbase17:34157] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 11:16:42,370 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.39 KB heapSize=54.99 KB 2023-07-21 11:16:42,401 INFO [M:0;jenkins-hbase17:34157] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.39 KB at sequenceid=982 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b9ae8d3117b749bd84d9a26739806a06 2023-07-21 11:16:42,407 DEBUG [M:0;jenkins-hbase17:34157] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b9ae8d3117b749bd84d9a26739806a06 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b9ae8d3117b749bd84d9a26739806a06 2023-07-21 11:16:42,415 INFO [M:0;jenkins-hbase17:34157] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b9ae8d3117b749bd84d9a26739806a06, entries=13, sequenceid=982, filesize=7.2 K 2023-07-21 11:16:42,416 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegion(2948): Finished flush of dataSize ~45.39 KB/46482, heapSize ~54.98 KB/56296, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 46ms, sequenceid=982, compaction requested=false 2023-07-21 11:16:42,419 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:42,419 DEBUG [M:0;jenkins-hbase17:34157] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:42,424 INFO [M:0;jenkins-hbase17:34157] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 11:16:42,424 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:42,425 INFO [M:0;jenkins-hbase17:34157] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34157 2023-07-21 11:16:42,427 DEBUG [M:0;jenkins-hbase17:34157] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,34157,1689938191982 already deleted, retry=false 2023-07-21 11:16:42,505 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,505 INFO [RS:2;jenkins-hbase17:43529] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43529,1689938192499; zookeeper connection closed. 
2023-07-21 11:16:42,506 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43529-0x101879756880013, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,506 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3226094e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3226094e 2023-07-21 11:16:42,606 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,606 INFO [RS:1;jenkins-hbase17:43985] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43985,1689938192366; zookeeper connection closed. 2023-07-21 11:16:42,606 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:43985-0x101879756880012, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,606 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7fe1b35f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7fe1b35f 2023-07-21 11:16:42,706 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,706 INFO [RS:0;jenkins-hbase17:41949] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41949,1689938192168; zookeeper connection closed. 2023-07-21 11:16:42,706 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:41949-0x101879756880011, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,706 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@677c0e53] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@677c0e53 2023-07-21 11:16:42,707 INFO [Listener at localhost.localdomain/33557] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 11:16:42,806 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,806 INFO [M:0;jenkins-hbase17:34157] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34157,1689938191982; zookeeper connection closed. 
2023-07-21 11:16:42,806 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:34157-0x101879756880010, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:42,807 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-21 11:16:43,524 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-21 11:16:44,494 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:44,809 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:44,809 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:44,809 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:44,809 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:44,809 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:44,810 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:44,810 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:44,810 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38633 2023-07-21 11:16:44,811 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:44,812 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:44,813 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38633 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:44,933 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:386330x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:44,937 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38633-0x10187975688001c connected 2023-07-21 11:16:44,961 DEBUG 
[Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:44,961 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:44,961 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:44,962 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38633 2023-07-21 11:16:44,962 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38633 2023-07-21 11:16:44,962 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38633 2023-07-21 11:16:44,962 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38633 2023-07-21 11:16:44,963 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38633 2023-07-21 11:16:44,966 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:44,966 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:44,966 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:44,967 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 11:16:44,967 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:44,967 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:44,967 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:44,968 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 40421 2023-07-21 11:16:44,968 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:44,970 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:44,970 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10d43f4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:44,971 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:44,971 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d48f97{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:45,082 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:45,083 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:45,084 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:45,084 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:16:45,086 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,087 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5357b32e{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-40421-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6256918060989345782/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:16:45,088 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@1907347{HTTP/1.1, (http/1.1)}{0.0.0.0:40421} 2023-07-21 11:16:45,089 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @54072ms 2023-07-21 11:16:45,089 INFO [Listener at localhost.localdomain/33557] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae, hbase.cluster.distributed=false 2023-07-21 11:16:45,091 DEBUG [pool-528-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:45,106 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:45,108 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33343 2023-07-21 11:16:45,109 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:45,110 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:45,111 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,111 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,112 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33343 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:45,115 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:333430x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:45,116 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:333430x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:45,117 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33343-0x10187975688001d connected 2023-07-21 11:16:45,117 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:45,118 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:45,118 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 
with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33343 2023-07-21 11:16:45,119 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33343 2023-07-21 11:16:45,119 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33343 2023-07-21 11:16:45,124 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33343 2023-07-21 11:16:45,126 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33343 2023-07-21 11:16:45,129 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:45,129 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:45,129 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:45,130 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:45,130 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:45,130 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:45,130 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:45,131 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 45593 2023-07-21 11:16:45,131 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:45,145 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,145 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@43c8f9b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:45,145 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,146 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30cceb92{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:45,226 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-21 11:16:45,246 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:45,247 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:45,248 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:45,248 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:16:45,249 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,250 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6f5782aa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-45593-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3620040417358950810/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:45,253 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@3e322de{HTTP/1.1, (http/1.1)}{0.0.0.0:45593} 2023-07-21 11:16:45,253 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @54236ms 2023-07-21 11:16:45,270 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:45,270 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,270 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): 
Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,270 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:45,271 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,271 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:45,271 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:45,272 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34931 2023-07-21 11:16:45,272 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:45,273 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:45,274 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,275 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,277 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34931 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:45,280 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:349310x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:45,281 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:349310x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:45,282 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34931-0x10187975688001e connected 2023-07-21 11:16:45,282 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:45,284 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:45,288 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34931 2023-07-21 11:16:45,288 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34931 2023-07-21 11:16:45,289 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34931 2023-07-21 11:16:45,290 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34931 2023-07-21 11:16:45,290 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34931 2023-07-21 11:16:45,293 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:45,293 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:45,293 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:45,294 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:45,294 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:45,294 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:45,295 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:45,295 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 41201 2023-07-21 11:16:45,295 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:45,300 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,301 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5874c5e3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:45,301 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,301 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e013486{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:45,396 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:45,397 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:45,397 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:45,397 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:45,398 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,398 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@55e2133f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-41201-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8859261369238957203/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:45,399 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@2cc75ae4{HTTP/1.1, (http/1.1)}{0.0.0.0:41201} 2023-07-21 11:16:45,400 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @54383ms 2023-07-21 11:16:45,409 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:45,409 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,409 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
11:16:45,410 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:45,410 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:45,410 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:45,410 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:45,411 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:35473 2023-07-21 11:16:45,411 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:45,413 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:45,414 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,415 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,417 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35473 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:45,426 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:354730x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:45,435 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:354730x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:45,436 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35473-0x10187975688001f connected 2023-07-21 11:16:45,437 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:45,439 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:45,440 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35473 2023-07-21 11:16:45,441 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35473 2023-07-21 11:16:45,441 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35473 2023-07-21 11:16:45,442 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35473 2023-07-21 11:16:45,442 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35473 2023-07-21 11:16:45,444 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:45,444 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:45,444 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:45,444 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:45,444 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:45,444 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:45,445 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:45,445 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 43393 2023-07-21 11:16:45,445 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:45,449 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,449 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@674a6b4a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:45,450 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,450 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1a20ea9e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:45,545 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:45,545 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:45,545 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:45,546 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:16:45,546 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:45,547 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3f207495{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-43393-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4042326118051245226/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:45,548 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@212a25a6{HTTP/1.1, (http/1.1)}{0.0.0.0:43393} 2023-07-21 11:16:45,548 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @54531ms 2023-07-21 11:16:45,551 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:45,559 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@50b8409{HTTP/1.1, (http/1.1)}{0.0.0.0:36623} 2023-07-21 11:16:45,559 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @54542ms 2023-07-21 11:16:45,559 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,560 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:16:45,561 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,561 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:45,561 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:45,562 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:45,562 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:45,562 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:45,564 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:45,568 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,38633,1689938204808 from backup master directory 2023-07-21 11:16:45,568 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:45,569 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,569 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:45,569 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 11:16:45,569 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,586 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:45,636 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5e6dce7c to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:45,648 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@450cc9e2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:45,648 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 11:16:45,649 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 11:16:45,649 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:45,656 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982-dead as it is dead 2023-07-21 11:16:45,656 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982-dead/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 2023-07-21 11:16:45,658 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982-dead/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 after 2ms 2023-07-21 11:16:45,659 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(300): Renamed 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982-dead/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 2023-07-21 11:16:45,659 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,34157,1689938191982-dead 2023-07-21 11:16:45,660 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,662 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38633%2C1689938204808, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,38633,1689938204808, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/oldWALs, maxLogs=10 2023-07-21 11:16:45,689 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:45,690 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:45,693 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:45,701 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/WALs/jenkins-hbase17.apache.org,38633,1689938204808/jenkins-hbase17.apache.org%2C38633%2C1689938204808.1689938205662 2023-07-21 11:16:45,703 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:45,704 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:45,704 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:45,704 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:45,704 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:45,710 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:45,712 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 11:16:45,712 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 11:16:45,730 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b9ae8d3117b749bd84d9a26739806a06 2023-07-21 11:16:45,734 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cfaa2766a0134ee480cd35adbbbb997d 2023-07-21 11:16:45,734 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:45,735 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-21 11:16:45,735 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 2023-07-21 11:16:45,740 
DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 128, firstSequenceIdInLog=872, maxSequenceIdInLog=984, path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 2023-07-21 11:16:45,741 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C34157%2C1689938191982.1689938192791 2023-07-21 11:16:45,745 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 11:16:45,748 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/984.seqid, newMaxSeqId=984, maxSeqId=870 2023-07-21 11:16:45,749 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=985; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11109361760, jitterRate=0.03463993966579437}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:45,749 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:45,752 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 11:16:45,754 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 11:16:45,754 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 11:16:45,754 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-07-21 11:16:45,754 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 11:16:45,766 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-21 11:16:45,767 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-21 11:16:45,767 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-21 11:16:45,767 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-21 11:16:45,767 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-21 11:16:45,768 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:45,768 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:45,768 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=18, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,34719,1689938159621, splitWal=true, meta=false 2023-07-21 11:16:45,768 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=19, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-21 11:16:45,768 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:45,769 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:45,769 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=26, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 11:16:45,770 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=27, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:45,770 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=48, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:45,770 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=69, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 11:16:45,771 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=70, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:45,771 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:45,771 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=76, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:45,772 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:45,772 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:45,772 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 11:16:45,772 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=84, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 11:16:45,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:45,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:45,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 11:16:45,774 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=92, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:45,774 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 11:16:45,774 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=96, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 11:16:45,775 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689938184318 type: FLUSH version: 2 ttl: 0 ) 2023-07-21 11:16:45,775 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:45,775 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=103, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 11:16:45,775 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:45,775 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 11:16:45,776 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=108, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:45,776 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=109, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:45,776 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:45,776 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:45,777 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=116, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 11:16:45,777 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=117, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 11:16:45,777 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=118, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,39805,1689938159444, splitWal=true, meta=true 2023-07-21 11:16:45,777 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=119, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,37137,1689938164928, splitWal=true, meta=false 2023-07-21 11:16:45,777 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=120, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,40467,1689938170241, splitWal=true, meta=false 2023-07-21 11:16:45,777 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=121, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,40783,1689938159262, splitWal=true, meta=false 2023-07-21 11:16:45,778 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=128, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-21 11:16:45,778 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 23 msec 2023-07-21 11:16:45,778 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 11:16:45,779 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-21 11:16:45,779 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase17.apache.org,43529,1689938192499, table=hbase:meta, region=1588230740 2023-07-21 11:16:45,781 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, 
and 0 'splitting'. 2023-07-21 11:16:45,782 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43985,1689938192366 already deleted, retry=false 2023-07-21 11:16:45,782 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,43985,1689938192366 on jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,783 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=131, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,43985,1689938192366, splitWal=true, meta=false 2023-07-21 11:16:45,783 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=131 for jenkins-hbase17.apache.org,43985,1689938192366 (carryingMeta=false) jenkins-hbase17.apache.org,43985,1689938192366/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6e62065e[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 11:16:45,784 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43529,1689938192499 already deleted, retry=false 2023-07-21 11:16:45,784 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,43529,1689938192499 on jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,786 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,43529,1689938192499, splitWal=true, meta=true 2023-07-21 11:16:45,786 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=132 for jenkins-hbase17.apache.org,43529,1689938192499 (carryingMeta=true) jenkins-hbase17.apache.org,43529,1689938192499/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@1d28ad99[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 11:16:45,787 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41949,1689938192168 already deleted, retry=false 2023-07-21 11:16:45,787 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,41949,1689938192168 on jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,788 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=133, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,41949,1689938192168, splitWal=true, meta=false 2023-07-21 11:16:45,788 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=133 for jenkins-hbase17.apache.org,41949,1689938192168 (carryingMeta=false) jenkins-hbase17.apache.org,41949,1689938192168/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@10c86c8f[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-21 11:16:45,788 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-21 11:16:45,789 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 11:16:45,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 11:16:45,790 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 11:16:45,790 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 11:16:45,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 11:16:45,791 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:45,791 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:45,791 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:45,792 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:45,792 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:45,795 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,38633,1689938204808, sessionid=0x10187975688001c, setting cluster-up flag (Was=false) 2023-07-21 11:16:45,796 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 11:16:45,797 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,798 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, 
/hbase/online-snapshot/abort 2023-07-21 11:16:45,799 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:45,800 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/.hbase-snapshot/.tmp 2023-07-21 11:16:45,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 11:16:45,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 11:16:45,801 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-21 11:16:45,804 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 11:16:45,805 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:45,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 11:16:45,809 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:45,810 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:43529 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:43529 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 11:16:45,812 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:43529 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:43529 2023-07-21 11:16:45,820 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:16:45,820 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 11:16:45,821 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 11:16:45,821 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
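[Editor's note, not part of the captured log] The two StochasticLoadBalancer lines above record the balancer being constructed with its default tuning (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false). As a minimal sketch only, the snippet below shows how such knobs are typically overridden in configuration before a (mini) cluster is started; the property key names are the commonly used ones and are an assumption here, not something this test is shown doing, so verify them against the HBase version in use.

    // Hypothetical illustration: setting the stochastic balancer tuning that the log reports.
    // The property names below are assumed common keys, not taken from this test's code.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Values mirror the "Loaded config" line in the log above.
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
        conf.setBoolean("hbase.master.loadbalance.bytable", false);
        // A master started from this conf would log the same parameters at startup.
        System.out.println(conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
      }
    }
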
2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:45,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,833 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689938235833 2023-07-21 11:16:45,833 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 11:16:45,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 11:16:45,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 11:16:45,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 11:16:45,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 11:16:45,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 11:16:45,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:45,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 11:16:45,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 11:16:45,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 11:16:45,837 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43985,1689938192366; numProcessing=1 2023-07-21 11:16:45,837 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=131, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,43985,1689938192366, splitWal=true, meta=false 2023-07-21 11:16:45,837 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43529,1689938192499; numProcessing=2 2023-07-21 11:16:45,838 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41949,1689938192168; numProcessing=3 2023-07-21 11:16:45,838 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=133, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,41949,1689938192168, splitWal=true, meta=false 2023-07-21 11:16:45,838 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=132, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,43529,1689938192499, splitWal=true, meta=true 2023-07-21 11:16:45,838 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 11:16:45,838 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 11:16:45,838 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938205838,5,FailOnTimeoutGroup] 2023-07-21 11:16:45,838 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938205838,5,FailOnTimeoutGroup] 2023-07-21 11:16:45,838 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,839 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 11:16:45,839 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,839 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:45,839 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689938205839, completionTime=-1 2023-07-21 11:16:45,839 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-21 11:16:45,839 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 11:16:45,840 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=132, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,43529,1689938192499, splitWal=true, meta=true, isMeta: true 2023-07-21 11:16:45,843 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499-splitting 2023-07-21 11:16:45,844 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499-splitting dir is empty, no logs to split. 2023-07-21 11:16:45,844 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,43529,1689938192499 WAL count=0, meta=true 2023-07-21 11:16:45,851 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499-splitting dir is empty, no logs to split. 2023-07-21 11:16:45,851 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,43529,1689938192499 WAL count=0, meta=true 2023-07-21 11:16:45,851 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,43529,1689938192499 WAL splitting is done? 
wals=0, meta=true 2023-07-21 11:16:45,851 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:45,851 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:45,851 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:45,851 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:45,852 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:45,852 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:45,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 11:16:45,853 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:45,853 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:45,853 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:45,853 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:45,855 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 11:16:45,857 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:45,857 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:45,857 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:45,858 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:45,860 DEBUG [RS:1;jenkins-hbase17:34931] zookeeper.ReadOnlyZKClient(139): Connect 0x773c5dbd to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:45,863 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:45,866 DEBUG [RS:0;jenkins-hbase17:33343] zookeeper.ReadOnlyZKClient(139): Connect 0x2f4c2130 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:45,869 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:45,882 DEBUG [RS:2;jenkins-hbase17:35473] zookeeper.ReadOnlyZKClient(139): Connect 
0x34e57acf to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:45,885 DEBUG [RS:1;jenkins-hbase17:34931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79fbad2a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:45,885 DEBUG [RS:0;jenkins-hbase17:33343] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@218bdaa9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:45,885 DEBUG [RS:1;jenkins-hbase17:34931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7dc436cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:45,885 DEBUG [RS:0;jenkins-hbase17:33343] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25b44a40, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:45,891 DEBUG [RS:2;jenkins-hbase17:35473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a229223, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:45,891 DEBUG [RS:2;jenkins-hbase17:35473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e1d11bf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:45,899 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:34931 2023-07-21 11:16:45,899 INFO [RS:1;jenkins-hbase17:34931] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:45,899 INFO [RS:1;jenkins-hbase17:34931] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:45,899 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 11:16:45,899 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:33343 2023-07-21 11:16:45,900 INFO [RS:0;jenkins-hbase17:33343] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:45,900 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,38633,1689938204808 with isa=jenkins-hbase17.apache.org/136.243.18.41:34931, startcode=1689938205269 2023-07-21 11:16:45,900 INFO [RS:0;jenkins-hbase17:33343] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:45,900 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:45,900 DEBUG [RS:1;jenkins-hbase17:34931] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:45,900 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,38633,1689938204808 with isa=jenkins-hbase17.apache.org/136.243.18.41:33343, startcode=1689938205105 2023-07-21 11:16:45,900 DEBUG [RS:0;jenkins-hbase17:33343] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:45,901 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:35473 2023-07-21 11:16:45,901 INFO [RS:2;jenkins-hbase17:35473] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:45,901 INFO [RS:2;jenkins-hbase17:35473] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:45,901 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33331, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:45,901 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:45,902 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37675, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:45,902 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,38633,1689938204808 with isa=jenkins-hbase17.apache.org/136.243.18.41:35473, startcode=1689938205409 2023-07-21 11:16:45,902 DEBUG [RS:2;jenkins-hbase17:35473] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:45,903 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35499, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:45,905 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38633] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:45,905 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:16:45,906 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:16:45,906 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38633] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:45,906 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:45,906 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:45,906 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 11:16:45,906 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38633] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:45,906 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:45,906 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:45,906 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40421 2023-07-21 11:16:45,907 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 11:16:45,907 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:45,907 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:45,907 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40421 2023-07-21 11:16:45,907 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:45,907 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:45,907 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40421 2023-07-21 11:16:45,907 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:45,910 DEBUG [RS:2;jenkins-hbase17:35473] 
zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:45,910 DEBUG [RS:0;jenkins-hbase17:33343] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:45,910 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,34931,1689938205269] 2023-07-21 11:16:45,910 DEBUG [RS:1;jenkins-hbase17:34931] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:45,910 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33343,1689938205105] 2023-07-21 11:16:45,910 WARN [RS:0;jenkins-hbase17:33343] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:45,910 WARN [RS:2;jenkins-hbase17:35473] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:45,910 INFO [RS:0;jenkins-hbase17:33343] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:45,910 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,35473,1689938205409] 2023-07-21 11:16:45,910 WARN [RS:1;jenkins-hbase17:34931] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:45,910 INFO [RS:2;jenkins-hbase17:35473] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:45,911 INFO [RS:1;jenkins-hbase17:34931] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:45,911 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:45,911 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:45,911 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:45,919 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:43529 this server is in the failed servers list 2023-07-21 11:16:45,922 DEBUG [RS:1;jenkins-hbase17:34931] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:45,928 DEBUG [RS:1;jenkins-hbase17:34931] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:45,932 DEBUG [RS:2;jenkins-hbase17:35473] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:45,932 DEBUG [RS:1;jenkins-hbase17:34931] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:45,932 DEBUG [RS:0;jenkins-hbase17:33343] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:45,932 DEBUG [RS:2;jenkins-hbase17:35473] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:45,933 DEBUG [RS:2;jenkins-hbase17:35473] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:45,933 DEBUG [RS:1;jenkins-hbase17:34931] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:45,933 INFO [RS:1;jenkins-hbase17:34931] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:45,934 DEBUG [RS:0;jenkins-hbase17:33343] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:45,936 DEBUG [RS:0;jenkins-hbase17:33343] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:45,936 DEBUG [RS:2;jenkins-hbase17:35473] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:45,937 DEBUG [RS:0;jenkins-hbase17:33343] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:45,937 INFO [RS:2;jenkins-hbase17:35473] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:45,937 INFO [RS:0;jenkins-hbase17:33343] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:45,940 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=101ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 11:16:45,940 INFO [RS:1;jenkins-hbase17:34931] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:45,943 INFO [RS:0;jenkins-hbase17:33343] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:45,944 INFO [RS:1;jenkins-hbase17:34931] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:45,944 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,944 INFO [RS:0;jenkins-hbase17:33343] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:45,944 INFO [RS:2;jenkins-hbase17:35473] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:45,944 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,948 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:45,949 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:45,954 INFO [RS:2;jenkins-hbase17:35473] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:45,954 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:45,955 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:45,955 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,955 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,955 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,955 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,956 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,956 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,956 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,957 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,957 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:45,957 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,957 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,957 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,957 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,958 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,958 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,958 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,958 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:45,958 DEBUG 
[RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,958 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,958 DEBUG [RS:0;jenkins-hbase17:33343] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,959 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,958 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,959 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:1;jenkins-hbase17:34931] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,960 DEBUG [RS:2;jenkins-hbase17:35473] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:45,969 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,969 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:45,969 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,972 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,976 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,976 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,987 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,987 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,987 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:45,992 INFO [RS:0;jenkins-hbase17:33343] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:45,992 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33343,1689938205105-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:46,001 INFO [RS:2;jenkins-hbase17:35473] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:46,002 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35473,1689938205409-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:46,006 INFO [RS:1;jenkins-hbase17:34931] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:46,006 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34931,1689938205269-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:46,008 DEBUG [jenkins-hbase17:38633] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:16:46,008 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:46,008 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:46,009 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:46,009 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:46,009 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:46,016 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34931,1689938205269, state=OPENING 2023-07-21 11:16:46,017 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:46,017 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=134, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34931,1689938205269}] 2023-07-21 11:16:46,017 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:46,030 INFO [RS:2;jenkins-hbase17:35473] regionserver.Replication(203): jenkins-hbase17.apache.org,35473,1689938205409 started 2023-07-21 11:16:46,030 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,35473,1689938205409, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:35473, sessionid=0x10187975688001f 2023-07-21 11:16:46,030 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:46,030 DEBUG [RS:2;jenkins-hbase17:35473] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:46,030 DEBUG [RS:2;jenkins-hbase17:35473] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35473,1689938205409' 2023-07-21 11:16:46,030 DEBUG [RS:2;jenkins-hbase17:35473] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:46,031 DEBUG [RS:2;jenkins-hbase17:35473] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:46,031 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:46,032 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:46,032 DEBUG [RS:2;jenkins-hbase17:35473] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:46,032 DEBUG [RS:2;jenkins-hbase17:35473] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35473,1689938205409' 
2023-07-21 11:16:46,032 DEBUG [RS:2;jenkins-hbase17:35473] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:46,032 DEBUG [RS:2;jenkins-hbase17:35473] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:46,045 DEBUG [RS:2;jenkins-hbase17:35473] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:46,046 INFO [RS:2;jenkins-hbase17:35473] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:46,046 INFO [RS:2;jenkins-hbase17:35473] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:16:46,050 INFO [RS:0;jenkins-hbase17:33343] regionserver.Replication(203): jenkins-hbase17.apache.org,33343,1689938205105 started 2023-07-21 11:16:46,051 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33343,1689938205105, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33343, sessionid=0x10187975688001d 2023-07-21 11:16:46,051 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:46,051 DEBUG [RS:0;jenkins-hbase17:33343] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:46,051 DEBUG [RS:0;jenkins-hbase17:33343] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33343,1689938205105' 2023-07-21 11:16:46,051 DEBUG [RS:0;jenkins-hbase17:33343] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:46,051 DEBUG [RS:0;jenkins-hbase17:33343] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33343,1689938205105' 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:46,052 DEBUG [RS:0;jenkins-hbase17:33343] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:46,052 INFO [RS:0;jenkins-hbase17:33343] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:46,053 INFO [RS:0;jenkins-hbase17:33343] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:16:46,060 INFO [RS:1;jenkins-hbase17:34931] regionserver.Replication(203): jenkins-hbase17.apache.org,34931,1689938205269 started 2023-07-21 11:16:46,060 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,34931,1689938205269, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:34931, sessionid=0x10187975688001e 2023-07-21 11:16:46,060 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:46,060 DEBUG [RS:1;jenkins-hbase17:34931] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34931,1689938205269' 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34931,1689938205269' 2023-07-21 11:16:46,061 DEBUG [RS:1;jenkins-hbase17:34931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:46,062 DEBUG [RS:1;jenkins-hbase17:34931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:46,062 DEBUG [RS:1;jenkins-hbase17:34931] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:46,062 INFO [RS:1;jenkins-hbase17:34931] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:46,062 INFO [RS:1;jenkins-hbase17:34931] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:16:46,121 WARN [ReadOnlyZKClient-127.0.0.1:61077@0x5e6dce7c] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 11:16:46,121 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:46,123 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37198, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:46,123 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34931] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:37198 deadline: 1689938266123, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:46,147 INFO [RS:2;jenkins-hbase17:35473] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35473%2C1689938205409, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,35473,1689938205409, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:46,155 INFO [RS:0;jenkins-hbase17:33343] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33343%2C1689938205105, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:46,171 INFO [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34931%2C1689938205269, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34931,1689938205269, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:46,196 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:46,196 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:46,196 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:46,210 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:46,210 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:46,210 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:46,224 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:46,255 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:46,256 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:46,257 INFO [RS:2;jenkins-hbase17:35473] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,35473,1689938205409/jenkins-hbase17.apache.org%2C35473%2C1689938205409.1689938206148 2023-07-21 11:16:46,258 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:46,258 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:46,260 INFO [RS:0;jenkins-hbase17:33343] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105/jenkins-hbase17.apache.org%2C33343%2C1689938205105.1689938206155 2023-07-21 11:16:46,268 DEBUG [RS:2;jenkins-hbase17:35473] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:46,268 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37206, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:46,268 INFO [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34931,1689938205269/jenkins-hbase17.apache.org%2C34931%2C1689938205269.1689938206171 2023-07-21 11:16:46,272 DEBUG [RS:0;jenkins-hbase17:33343] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], 
DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]] 2023-07-21 11:16:46,276 DEBUG [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]] 2023-07-21 11:16:46,288 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:16:46,288 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:46,290 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34931%2C1689938205269.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34931,1689938205269, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:46,307 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:46,307 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:46,314 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:46,316 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,34931,1689938205269/jenkins-hbase17.apache.org%2C34931%2C1689938205269.meta.1689938206291.meta 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]] 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered 
coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:16:46,317 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:16:46,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:16:46,318 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:46,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:46,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:46,320 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:16:46,327 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09 2023-07-21 11:16:46,332 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9 2023-07-21 11:16:46,332 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:46,332 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 11:16:46,333 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:46,334 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:46,334 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:46,344 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:46,344 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:46,353 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:46,353 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:46,354 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:46,354 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:46,355 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:46,355 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:46,355 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:46,369 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/176c58e30866445dac88d784f537577a 2023-07-21 11:16:46,374 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/4749bcea1e764757be2898f2ea93c5d8 2023-07-21 11:16:46,374 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:46,375 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:46,376 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:46,378 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 11:16:46,379 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:46,380 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=175; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11931776640, jitterRate=0.11123329401016235}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:46,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:46,381 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=135, masterSystemTime=1689938206224 2023-07-21 11:16:46,389 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:16:46,390 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:16:46,392 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34931,1689938205269, state=OPEN 2023-07-21 11:16:46,393 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:46,393 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:46,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=134 2023-07-21 11:16:46,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=134, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34931,1689938205269 in 378 msec 2023-07-21 11:16:46,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-21 11:16:46,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 549 msec 2023-07-21 11:16:46,450 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:46,450 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:41949 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41949 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:46,452 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:41949 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41949 2023-07-21 11:16:46,593 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41949 this server is in the failed servers list 2023-07-21 11:16:46,804 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41949 this server is in the failed servers list 2023-07-21 11:16:47,109 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41949 this server is in the failed servers list 2023-07-21 11:16:47,327 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 11:16:47,461 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1622ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1521ms 2023-07-21 11:16:47,618 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41949 this server is in the failed servers list 2023-07-21 11:16:48,624 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:41949 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41949
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
	at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:750)
2023-07-21 11:16:48,625 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:41949 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41949 2023-07-21 11:16:48,965 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3126ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3025ms 2023-07-21 11:16:50,367 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4528ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-21 11:16:50,367 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 11:16:50,370 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=2bd94f497343684e2f5a451c6e430d4d, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,43985,1689938192366, regionLocation=jenkins-hbase17.apache.org,43985,1689938192366, openSeqNum=15 2023-07-21 11:16:50,370 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=77ef890485c37098a66e3a9a030a0490, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,41949,1689938192168, regionLocation=jenkins-hbase17.apache.org,41949,1689938192168, openSeqNum=2 2023-07-21 11:16:50,370 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=2782e41606006289532e239f665ea4eb, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,41949,1689938192168, regionLocation=jenkins-hbase17.apache.org,41949,1689938192168, openSeqNum=83 2023-07-21 11:16:50,370 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 11:16:50,370 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689938270370 2023-07-21 11:16:50,370 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689938330370 2023-07-21 11:16:50,370 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 2 msec 2023-07-21 11:16:50,387 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,43529,1689938192499 had 1 regions 2023-07-21 11:16:50,387 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,41949,1689938192168 had 2 regions 2023-07-21 11:16:50,387 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,43985,1689938192366 had 1 regions 2023-07-21 11:16:50,388 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38633,1689938204808-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:50,388 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38633,1689938204808-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:50,388 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38633,1689938204808-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:50,388 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:38633, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:50,388 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:50,388 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
is NOT online; state={2bd94f497343684e2f5a451c6e430d4d state=OPEN, ts=1689938210370, server=jenkins-hbase17.apache.org,43985,1689938192366}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-21 11:16:50,388 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=131, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,43985,1689938192366, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:50,389 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=132, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,43529,1689938192499, splitWal=true, meta=true, isMeta: false 2023-07-21 11:16:50,388 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=133, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,41949,1689938192168, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:50,391 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366-splitting 2023-07-21 11:16:50,392 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366-splitting dir is empty, no logs to split. 2023-07-21 11:16:50,393 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,43985,1689938192366 WAL count=0, meta=false 2023-07-21 11:16:50,393 WARN [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase17.apache.org,43985,1689938192366/hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., unknown_server=jenkins-hbase17.apache.org,41949,1689938192168/hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490., unknown_server=jenkins-hbase17.apache.org,41949,1689938192168/hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:50,395 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499-splitting dir is empty, no logs to split. 2023-07-21 11:16:50,395 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,43529,1689938192499 WAL count=0, meta=false 2023-07-21 11:16:50,395 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168-splitting 2023-07-21 11:16:50,396 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168-splitting dir is empty, no logs to split. 2023-07-21 11:16:50,396 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,41949,1689938192168 WAL count=0, meta=false 2023-07-21 11:16:50,399 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366-splitting dir is empty, no logs to split. 
2023-07-21 11:16:50,399 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,43985,1689938192366 WAL count=0, meta=false 2023-07-21 11:16:50,399 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,43985,1689938192366 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:50,401 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43529,1689938192499-splitting dir is empty, no logs to split. 2023-07-21 11:16:50,401 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,43529,1689938192499 WAL count=0, meta=false 2023-07-21 11:16:50,401 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,43529,1689938192499 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:50,402 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168-splitting dir is empty, no logs to split. 2023-07-21 11:16:50,402 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,41949,1689938192168 WAL count=0, meta=false 2023-07-21 11:16:50,402 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,41949,1689938192168 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:50,404 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,43529,1689938192499 after splitting done 2023-07-21 11:16:50,404 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,43529,1689938192499 from processing; numProcessing=2 2023-07-21 11:16:50,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,43529,1689938192499, splitWal=true, meta=true in 4.6200 sec 2023-07-21 11:16:50,409 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,43985,1689938192366 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,43985,1689938192366-splitting does not exist. 2023-07-21 11:16:50,411 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,41949,1689938192168 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,41949,1689938192168-splitting does not exist. 
2023-07-21 11:16:50,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=131, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN}] 2023-07-21 11:16:50,411 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN}, {pid=138, ppid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN}] 2023-07-21 11:16:50,413 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=131, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN 2023-07-21 11:16:50,413 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN 2023-07-21 11:16:50,413 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN 2023-07-21 11:16:50,414 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:50,414 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:50,414 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=131, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 11:16:50,414 DEBUG [jenkins-hbase17:38633] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:16:50,414 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:50,414 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:50,415 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:50,415 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:50,415 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-21 11:16:50,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta 
row=77ef890485c37098a66e3a9a030a0490, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:50,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938210417"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938210417"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938210417"}]},"ts":"1689938210417"} 2023-07-21 11:16:50,418 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:50,418 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938210418"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938210418"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938210418"}]},"ts":"1689938210418"} 2023-07-21 11:16:50,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=138, state=RUNNABLE; OpenRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,34931,1689938205269}] 2023-07-21 11:16:50,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=137, state=RUNNABLE; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,35473,1689938205409}] 2023-07-21 11:16:50,567 DEBUG [jenkins-hbase17:38633] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 11:16:50,567 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 11:16:50,567 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 11:16:50,567 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 11:16:50,567 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 11:16:50,567 DEBUG [jenkins-hbase17:38633] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 11:16:50,568 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:50,568 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938210568"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938210568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938210568"}]},"ts":"1689938210568"} 2023-07-21 11:16:50,570 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,34931,1689938205269}] 2023-07-21 11:16:50,575 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to 
jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:50,575 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:50,576 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59934, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:50,578 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:50,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2bd94f497343684e2f5a451c6e430d4d, NAME => 'hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:50,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,579 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:50,579 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,579 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,581 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,582 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:50,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:50,582 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:50,582 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:50,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:50,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. service=MultiRowMutationService 2023-07-21 11:16:50,583 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 11:16:50,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,583 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2bd94f497343684e2f5a451c6e430d4d columnFamilyName info 2023-07-21 11:16:50,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:50,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,584 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,584 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:50,585 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:50,585 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2782e41606006289532e239f665ea4eb columnFamilyName m 2023-07-21 11:16:50,591 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:50,591 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info/db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:50,591 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(310): Store=2bd94f497343684e2f5a451c6e430d4d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:50,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,593 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60 2023-07-21 11:16:50,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,597 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:50,598 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2bd94f497343684e2f5a451c6e430d4d; next sequenceid=18; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10364807200, jitterRate=-0.034702107310295105}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:50,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:50,599 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., pid=141, masterSystemTime=1689938210573 2023-07-21 11:16:50,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:50,601 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:50,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:50,602 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPEN, openSeqNum=18, regionLocation=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:50,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77ef890485c37098a66e3a9a030a0490, NAME => 'hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:50,602 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938210602"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938210602"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938210602"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938210602"}]},"ts":"1689938210602"} 2023-07-21 11:16:50,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:50,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,604 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 77ef890485c37098a66e3a9a030a0490 
2023-07-21 11:16:50,605 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/q 2023-07-21 11:16:50,605 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/q 2023-07-21 11:16:50,605 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77ef890485c37098a66e3a9a030a0490 columnFamilyName q 2023-07-21 11:16:50,606 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(310): Store=77ef890485c37098a66e3a9a030a0490/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:50,606 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-21 11:16:50,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,34931,1689938205269 in 35 msec 2023-07-21 11:16:50,607 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/u 2023-07-21 11:16:50,608 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/u 2023-07-21 11:16:50,608 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77ef890485c37098a66e3a9a030a0490 columnFamilyName u 2023-07-21 11:16:50,608 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3 2023-07-21 11:16:50,608 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(310): Store=2782e41606006289532e239f665ea4eb/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:50,609 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(310): Store=77ef890485c37098a66e3a9a030a0490/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:50,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=131 2023-07-21 11:16:50,609 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,43985,1689938192366 after splitting done 2023-07-21 11:16:50,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=131, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, ASSIGN in 196 msec 2023-07-21 11:16:50,609 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase17.apache.org,43985,1689938192366 from processing; numProcessing=1 2023-07-21 11:16:50,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,610 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,43985,1689938192366, splitWal=true, meta=false in 4.8270 sec 2023-07-21 11:16:50,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-21 11:16:50,613 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:50,613 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:50,614 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 77ef890485c37098a66e3a9a030a0490; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10120707520, jitterRate=-0.05743566155433655}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 11:16:50,614 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2782e41606006289532e239f665ea4eb; next sequenceid=91; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2dc849ca, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:50,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:50,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:50,614 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490., pid=139, masterSystemTime=1689938210573 2023-07-21 11:16:50,615 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., pid=140, masterSystemTime=1689938210575 2023-07-21 11:16:50,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:50,620 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:50,621 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:50,621 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:50,621 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938210621"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938210621"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938210621"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938210621"}]},"ts":"1689938210621"} 2023-07-21 11:16:50,622 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:50,626 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPEN, openSeqNum=91, regionLocation=jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:50,626 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938210626"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938210626"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938210626"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938210626"}]},"ts":"1689938210626"} 2023-07-21 11:16:50,632 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=138 2023-07-21 11:16:50,632 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=138, state=SUCCESS; OpenRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,34931,1689938205269 in 208 msec 2023-07-21 11:16:50,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-21 11:16:50,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,35473,1689938205409 in 208 msec 2023-07-21 11:16:50,635 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, ASSIGN in 221 msec 2023-07-21 11:16:50,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=133 2023-07-21 11:16:50,636 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,41949,1689938192168 after splitting done 2023-07-21 11:16:50,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=133, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, ASSIGN in 223 msec 2023-07-21 11:16:50,636 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase17.apache.org,41949,1689938192168 from processing; numProcessing=0 2023-07-21 11:16:50,637 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,41949,1689938192168, splitWal=true, meta=false in 4.8490 sec 2023-07-21 11:16:50,637 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:50,638 INFO [RS-EventLoopGroup-16-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59936, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:50,641 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 11:16:50,641 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] 
rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 11:16:50,654 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:50,654 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:50,654 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:50,655 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-21 11:16:50,655 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 11:16:51,389 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-21 11:16:51,402 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 11:16:51,403 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 11:16:51,403 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.834sec 2023-07-21 11:16:51,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 11:16:51,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 11:16:51,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 11:16:51,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38633,1689938204808-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 11:16:51,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38633,1689938204808-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-21 11:16:51,409 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 11:16:51,474 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x09f52c8a to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:51,478 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5777d26f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:51,480 DEBUG [hconnection-0xbdd2ac8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:51,482 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58210, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:51,489 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-21 11:16:51,489 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09f52c8a to 127.0.0.1:61077 2023-07-21 11:16:51,489 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:51,491 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase17.apache.org:38633 after: jenkins-hbase17.apache.org:38633 2023-07-21 11:16:51,491 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x7ed9d4df to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:51,495 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1cb02aa6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:51,495 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:51,496 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 11:16:51,497 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38532, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 11:16:51,498 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 11:16:51,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 11:16:51,499 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(139): Connect 0x00567069 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:51,510 DEBUG [Listener at 
localhost.localdomain/33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33f6fb1b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:51,510 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:51,516 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:51,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101879756880027 connected 2023-07-21 11:16:51,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:51,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:51,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:51,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:51,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:51,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:51,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:51,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:51,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:51,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:51,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:51,531 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 11:16:51,542 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:51,543 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated 
default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:51,543 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:51,543 INFO [Listener at localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:51,543 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:51,543 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:51,544 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:51,544 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38565 2023-07-21 11:16:51,548 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:51,550 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:51,551 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:51,552 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:51,553 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38565 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:51,558 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:385650x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:51,560 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:385650x0, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:51,561 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38565-0x101879756880028 connected 2023-07-21 11:16:51,562 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 11:16:51,563 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:51,563 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38565 2023-07-21 11:16:51,563 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38565 2023-07-21 11:16:51,564 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38565 2023-07-21 11:16:51,564 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38565 2023-07-21 11:16:51,567 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38565 2023-07-21 11:16:51,570 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:51,570 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:51,570 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:51,571 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:51,571 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:51,571 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:51,572 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:51,572 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 34897 2023-07-21 11:16:51,572 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:51,579 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:51,579 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@288c3061{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:51,579 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:51,579 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14185deb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:51,698 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:51,699 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:51,699 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:51,700 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 11:16:51,701 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:51,702 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4b965b66{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-34897-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7268387232160549691/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:51,704 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@55d5015d{HTTP/1.1, (http/1.1)}{0.0.0.0:34897} 2023-07-21 11:16:51,704 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @60687ms 2023-07-21 11:16:51,714 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:51,717 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:51,719 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:51,719 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:51,729 DEBUG [RS:3;jenkins-hbase17:38565] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:51,732 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ReadOnlyZKClient(139): Connect 0x6b5cf2fc to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:51,737 DEBUG [RS:3;jenkins-hbase17:38565] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@659b01e7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:51,737 DEBUG [RS:3;jenkins-hbase17:38565] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61a4998c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:51,749 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:38565 2023-07-21 11:16:51,749 INFO [RS:3;jenkins-hbase17:38565] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:51,749 INFO [RS:3;jenkins-hbase17:38565] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:51,749 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:51,750 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,38633,1689938204808 with isa=jenkins-hbase17.apache.org/136.243.18.41:38565, startcode=1689938211542 2023-07-21 11:16:51,750 DEBUG [RS:3;jenkins-hbase17:38565] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:51,769 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56559, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:51,769 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38633] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,769 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:16:51,771 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:51,771 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:51,771 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40421 2023-07-21 11:16:51,772 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:51,772 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:51,772 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:51,772 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:51,772 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:51,773 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38565,1689938211542] 2023-07-21 11:16:51,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:51,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:51,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:51,774 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:16:51,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:51,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:51,774 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:51,774 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,774 WARN [RS:3;jenkins-hbase17:38565] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 11:16:51,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:51,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:51,775 INFO [RS:3;jenkins-hbase17:38565] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:51,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:51,775 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 11:16:51,775 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,776 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,776 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,798 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:51,799 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:51,799 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 
2023-07-21 11:16:51,800 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,801 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:51,801 INFO [RS:3;jenkins-hbase17:38565] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:51,804 INFO [RS:3;jenkins-hbase17:38565] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:51,807 INFO [RS:3;jenkins-hbase17:38565] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:51,807 INFO [RS:3;jenkins-hbase17:38565] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:51,808 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:51,811 INFO [RS:3;jenkins-hbase17:38565] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:51,812 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,812 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,812 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,812 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,812 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,812 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:51,813 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,813 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,813 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,813 DEBUG [RS:3;jenkins-hbase17:38565] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:51,819 INFO [RS:3;jenkins-hbase17:38565] 
hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:51,819 INFO [RS:3;jenkins-hbase17:38565] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:51,819 INFO [RS:3;jenkins-hbase17:38565] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:51,821 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 11:16:51,829 INFO [RS:3;jenkins-hbase17:38565] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:51,829 INFO [RS:3;jenkins-hbase17:38565] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38565,1689938211542-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:51,844 INFO [RS:3;jenkins-hbase17:38565] regionserver.Replication(203): jenkins-hbase17.apache.org,38565,1689938211542 started 2023-07-21 11:16:51,844 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38565,1689938211542, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38565, sessionid=0x101879756880028 2023-07-21 11:16:51,844 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:51,844 DEBUG [RS:3;jenkins-hbase17:38565] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,844 DEBUG [RS:3;jenkins-hbase17:38565] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38565,1689938211542' 2023-07-21 11:16:51,844 DEBUG [RS:3;jenkins-hbase17:38565] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:51,845 DEBUG [RS:3;jenkins-hbase17:38565] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:51,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:51,845 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:51,845 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:51,845 DEBUG [RS:3;jenkins-hbase17:38565] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:51,845 DEBUG [RS:3;jenkins-hbase17:38565] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38565,1689938211542' 2023-07-21 11:16:51,846 DEBUG [RS:3;jenkins-hbase17:38565] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:51,846 DEBUG [RS:3;jenkins-hbase17:38565] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:51,846 DEBUG [RS:3;jenkins-hbase17:38565] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:51,846 INFO 
[RS:3;jenkins-hbase17:38565] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:51,846 INFO [RS:3;jenkins-hbase17:38565] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 11:16:51,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:51,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:51,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:51,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:51,852 DEBUG [hconnection-0x3b9be96-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:51,853 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58216, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:51,865 DEBUG [hconnection-0x3b9be96-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:51,867 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59938, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:51,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:51,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:51,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:38633] to rsgroup master 2023-07-21 11:16:51,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:51,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:38532 deadline: 1689939411872, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. 2023-07-21 11:16:51,873 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:51,876 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:51,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:51,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:51,878 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33343, jenkins-hbase17.apache.org:34931, jenkins-hbase17.apache.org:35473, jenkins-hbase17.apache.org:38565], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:51,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:51,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:51,934 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 11:16:51,939 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 11:16:51,939 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-21 11:16:51,940 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 11:16:51,948 INFO [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38565%2C1689938211542, suffix=, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,38565,1689938211542, 
archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:51,960 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=562 (was 528) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1662154281_17 at /127.0.0.1:57222 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741901_1077] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp903767921-1830 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-15173d36-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x1121a5df-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x34e57acf sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64470b54-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1820264138-1864 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1820264138-1861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase17:35473-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741902_1078, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x2f4c2130-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1820264138-1858-acceptor-0@306172fd-ServerConnector@2cc75ae4{HTTP/1.1, (http/1.1)}{0.0.0.0:41201} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1903 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741903_1079, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase17:34931Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2124 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:2;jenkins-hbase17:35473 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741903_1079, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1842188867-1800 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741905_1081, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64470b54-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_296783870_17 at /127.0.0.1:50936 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_296783870_17 at /127.0.0.1:50950 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741904_1080, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_190541568_17 at /127.0.0.1:35186 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38565 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp191773570-1893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp903767921-1832 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2117 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,34931,1689938205269.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741903_1079, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64470b54-metaLookup-shared--pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: jenkins-hbase17:33343Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741905_1081, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938205838 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1357116077_17 at /127.0.0.1:57234 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741902_1078] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase17:33343 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x5e6dce7c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase17:38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x5e6dce7c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x2f4c2130-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x6b5cf2fc-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase17:35473Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase17:34931-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:49974 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x6b5cf2fc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp903767921-1831 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:44588 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741904_1080] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_190541568_17 at /127.0.0.1:50944 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:52216 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741904_1080] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741904_1080, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:38565 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:52224 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741905_1081] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1900 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1662154281_17 at /127.0.0.1:52188 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741901_1077] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741901_1077, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp191773570-1891 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1904 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1802 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase17:38633 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x00567069-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-6196df0c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp903767921-1833 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,35473,1689938205409 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1798 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938205838 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741901_1077, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,33343,1689938205105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp903767921-1829 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1357116077_17 at /127.0.0.1:52200 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741902_1078] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-173c6123-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x1121a5df sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1797-acceptor-0@497c3c2d-ServerConnector@1907347{HTTP/1.1, (http/1.1)}{0.0.0.0:40421} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1820264138-1857 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1820264138-1859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp903767921-1827 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,34157,1689938191982 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData-prefix:jenkins-hbase17.apache.org,38633,1689938204808 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae-prefix:jenkins-hbase17.apache.org,34931,1689938205269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp907437422-2118-acceptor-0@5465347a-ServerConnector@55d5015d{HTTP/1.1, (http/1.1)}{0.0.0.0:34897} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase17:34931 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:38565Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741905_1081, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1820264138-1862 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x1121a5df-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp903767921-1834 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1831695499_17 at /127.0.0.1:57248 [Receiving block 
BP-1138614856-136.243.18.41-1689938153171:blk_1073741903_1079] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-56cdf355-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64470b54-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging 
thread: ReadOnlyZKClient-127.0.0.1:61077@0x34e57acf-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_190541568_17 at /127.0.0.1:35180 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x00567069-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp191773570-1890 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741902_1078, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1899 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x773c5dbd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1905 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1662154281_17 at /127.0.0.1:44562 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741901_1077] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp197946112-1902-acceptor-0@158fef05-ServerConnector@50b8409{HTTP/1.1, (http/1.1)}{0.0.0.0:36623} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x2f4c2130 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x773c5dbd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase17:33343-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1820264138-1860 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1801 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1820264138-1863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1898 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x7ed9d4df sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64470b54-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp191773570-1889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3b9be96-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741902_1078, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x6b5cf2fc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1796 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64470b54-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x00567069 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1323183535.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1e4504c7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x34e57acf-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp191773570-1892 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1831695499_17 at /127.0.0.1:52204 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741903_1079] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp191773570-1894 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x773c5dbd-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x7ed9d4df-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2122 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1357116077_17 at /127.0.0.1:44574 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741902_1078] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1831695499_17 at /127.0.0.1:44576 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741903_1079] sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2036392764) connection to localhost.localdomain/127.0.0.1:36511 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp191773570-1888-acceptor-0@4548b5e1-ServerConnector@212a25a6{HTTP/1.1, (http/1.1)}{0.0.0.0:43393} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x64470b54-metaLookup-shared--pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:57260 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741905_1081] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:57252 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741904_1080] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741904_1080, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2123 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x64470b54-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x7ed9d4df-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3b9be96-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp903767921-1828-acceptor-0@37461882-ServerConnector@3e322de{HTTP/1.1, (http/1.1)}{0.0.0.0:45593} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp907437422-2120 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase17:38565-longCompactions-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842188867-1803 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp197946112-1901 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp191773570-1887 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1543002837.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:36511 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61077@0x5e6dce7c-SendThread(127.0.0.1:61077) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-91696965_17 at /127.0.0.1:44602 [Receiving block BP-1138614856-136.243.18.41-1689938153171:blk_1073741905_1081] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1138614856-136.243.18.41-1689938153171:blk_1073741901_1077, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=844 (was 811) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=727 (was 840), ProcessCount=186 (was 186), AvailableMemoryMB=2565 (was 3187)
2023-07-21 11:16:51,962 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=562 is superior to 500
2023-07-21 11:16:51,991 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=562, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=727, ProcessCount=186, AvailableMemoryMB=2554
2023-07-21 11:16:51,991 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=562 is superior to 500
2023-07-21 11:16:51,991 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(132): testClearDeadServers
2023-07-21 11:16:52,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 11:16:52,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 11:16:52,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default
2023-07-21 11:16:52,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-21 11:16:52,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables
2023-07-21 11:16:52,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default
2023-07-21 11:16:52,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers
2023-07-21 11:16:52,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master
2023-07-21 11:16:52,023 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]
2023-07-21 11:16:52,023 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK]
2023-07-21 11:16:52,024 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK]
2023-07-21 11:16:52,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 11:16:52,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-21 11:16:52,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-21 11:16:52,032 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-21 11:16:52,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master
2023-07-21 11:16:52,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 11:16:52,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 11:16:52,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 11:16:52,036 INFO [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,38565,1689938211542/jenkins-hbase17.apache.org%2C38565%2C1689938211542.1689938211949
2023-07-21 11:16:52,037 DEBUG [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]]
2023-07-21 11:16:52,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup
2023-07-21 11:16:52,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 11:16:52,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 11:16:52,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:38633] to rsgroup master
2023-07-21 11:16:52,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 11:16:52,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:38532 deadline: 1689939412045, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist.
2023-07-21 11:16:52,046 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 11:16:52,047 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:52,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:52,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:52,049 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33343, jenkins-hbase17.apache.org:34931, jenkins-hbase17.apache.org:35473, jenkins-hbase17.apache.org:38565], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:52,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:52,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:52,050 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-21 11:16:52,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:52,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:52,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:52,053 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:52,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:52,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:52,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:52,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:52,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33343, jenkins-hbase17.apache.org:34931, jenkins-hbase17.apache.org:35473] to rsgroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:52,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:52,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:52,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(238): Moving server region 2bd94f497343684e2f5a451c6e430d4d, which do not belong to RSGroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] procedure2.ProcedureExecutor(1029): Stored pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, REOPEN/MOVE 2023-07-21 11:16:52,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(238): Moving server region 77ef890485c37098a66e3a9a030a0490, which do not belong to RSGroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,068 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, REOPEN/MOVE 2023-07-21 11:16:52,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, REOPEN/MOVE 2023-07-21 11:16:52,072 INFO [PEWorker-5] 
assignment.RegionStateStore(219): pid=142 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:52,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,073 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, REOPEN/MOVE 2023-07-21 11:16:52,073 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938212072"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938212072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938212072"}]},"ts":"1689938212072"} 2023-07-21 11:16:52,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] procedure2.ProcedureExecutor(1029): Stored pid=144, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:52,075 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:52,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(238): Moving server region 2782e41606006289532e239f665ea4eb, which do not belong to RSGroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:52,075 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938212075"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938212075"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938212075"}]},"ts":"1689938212075"} 2023-07-21 11:16:52,075 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 11:16:52,076 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=142, state=RUNNABLE; CloseRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,34931,1689938205269}] 2023-07-21 11:16:52,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] procedure2.ProcedureExecutor(1029): Stored pid=145, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:52,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(286): Moving 4 region(s) to group default, current retry=0 2023-07-21 11:16:52,076 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=143, state=RUNNABLE; CloseRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,34931,1689938205269}] 2023-07-21 11:16:52,076 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE 2023-07-21 11:16:52,077 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34931,1689938205269, state=CLOSING 2023-07-21 11:16:52,077 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:52,079 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938212077"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938212077"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938212077"}]},"ts":"1689938212077"} 2023-07-21 11:16:52,079 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:52,079 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=144, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34931,1689938205269}] 2023-07-21 11:16:52,079 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:52,082 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,35473,1689938205409}] 2023-07-21 11:16:52,087 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:52,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:52,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 11:16:52,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 77ef890485c37098a66e3a9a030a0490, disabling compactions & flushes 2023-07-21 11:16:52,234 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:52,235 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
after waiting 0 ms 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:52,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:52,235 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.83 KB heapSize=7 KB 2023-07-21 11:16:52,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 11:16:52,241 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:52,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:52,241 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 77ef890485c37098a66e3a9a030a0490 move to jenkins-hbase17.apache.org,38565,1689938211542 record at close sequenceid=5 2023-07-21 11:16:52,243 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=147, ppid=143, state=RUNNABLE; CloseRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:52,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:52,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:52,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2bd94f497343684e2f5a451c6e430d4d, disabling compactions & flushes 2023-07-21 11:16:52,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:52,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:52,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. after waiting 0 ms 2023-07-21 11:16:52,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
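The ConstraintException and the AddRSGroup/MoveServers requests above all go through the hbase-rsgroup admin client named in the stack trace (RSGroupAdminClient.moveServers). A minimal sketch of that call path, not taken from the test source: the group name is copied from the log, the host/port comes from the log's server list, and the branch-2.4 constructor and method signatures are assumptions.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Same client class that appears in the stack trace above (constructor signature assumed).
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("Group_testClearDeadServers_1036591474");
      // Naming a server the master no longer tracks as online is what raises
      // ConstraintException("Server ... is either offline or it does not exist"),
      // the failure TestRSGroupsBase logs above as "Got this on setup, FYI" and retries.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 33343)),
          "Group_testClearDeadServers_1036591474");
    }
  }
}

When the move succeeds, the master then relocates any regions still hosted on the moved servers, which is the burst of TransitRegionStateProcedure REOPEN/MOVE entries and region closes that follows in the log.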
2023-07-21 11:16:52,257 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.83 KB at sequenceid=186 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/beb74a5d244f4aa1a3f983de3a1805bc 2023-07-21 11:16:52,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/recovered.edits/20.seqid, newMaxSeqId=20, maxSeqId=17 2023-07-21 11:16:52,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:52,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:52,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 2bd94f497343684e2f5a451c6e430d4d move to jenkins-hbase17.apache.org,38565,1689938211542 record at close sequenceid=18 2023-07-21 11:16:52,260 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=146, ppid=142, state=RUNNABLE; CloseRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:52,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:52,266 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/beb74a5d244f4aa1a3f983de3a1805bc as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc 2023-07-21 11:16:52,272 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc, entries=33, sequenceid=186, filesize=8.6 K 2023-07-21 11:16:52,273 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.83 KB/3921, heapSize ~6.48 KB/6640, currentSize=0 B/0 for 1588230740 in 38ms, sequenceid=186, compaction requested=true 2023-07-21 11:16:52,285 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/189.seqid, newMaxSeqId=189, maxSeqId=174 2023-07-21 11:16:52,286 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:52,287 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:52,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:52,287 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase17.apache.org,38565,1689938211542 record at close sequenceid=186 2023-07-21 11:16:52,290 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 11:16:52,290 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 11:16:52,291 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=144 2023-07-21 11:16:52,291 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=144, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34931,1689938205269 in 211 msec 2023-07-21 11:16:52,292 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=144, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,38565,1689938211542; forceNewPlan=false, retain=false 2023-07-21 11:16:52,443 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38565,1689938211542, state=OPENING 2023-07-21 11:16:52,445 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:52,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38565,1689938211542}] 2023-07-21 11:16:52,446 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:52,602 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:52,602 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:52,603 INFO [RS-EventLoopGroup-17-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42426, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:52,608 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 11:16:52,608 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:52,610 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38565%2C1689938211542.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,38565,1689938211542, archiveDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs, maxLogs=32 2023-07-21 11:16:52,627 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, 
datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK] 2023-07-21 11:16:52,627 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK] 2023-07-21 11:16:52,627 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK] 2023-07-21 11:16:52,633 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,38565,1689938211542/jenkins-hbase17.apache.org%2C38565%2C1689938211542.meta.1689938212610.meta 2023-07-21 11:16:52,633 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44393,DS-ea57644f-08ea-41f6-8f79-0bb7d99d55a1,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-b96b1104-46b1-4a71-a873-af9769219804,DISK], DatanodeInfoWithStorage[127.0.0.1:36321,DS-520c98cd-48f2-458b-87c2-acc7c5f40723,DISK]] 2023-07-21 11:16:52,633 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:52,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:52,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 11:16:52,634 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
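Once the CloseRegionProcedure completes, the OpenRegionProcedure above brings hbase:meta up on jenkins-hbase17.apache.org,38565 with a fresh AsyncFSWAL and reloads the MultiRowMutationEndpoint coprocessor. A small client-side sketch, using only the standard public client API (connection settings assumed), of how the new meta location can be observed after such a REOPEN/MOVE:

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class LocateMetaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // reload=true forces a fresh lookup instead of the cached location,
      // which matters right after a move like the one logged above.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}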
2023-07-21 11:16:52,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 11:16:52,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:52,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 11:16:52,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 11:16:52,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 11:16:52,637 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:52,637 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info 2023-07-21 11:16:52,638 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 11:16:52,647 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09 2023-07-21 11:16:52,652 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9 2023-07-21 11:16:52,659 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc 2023-07-21 11:16:52,659 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:52,660 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, 
for column family rep_barrier of region 1588230740 2023-07-21 11:16:52,661 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:52,661 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier 2023-07-21 11:16:52,661 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 11:16:52,672 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:52,672 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/ce1c3c0335804360b6540dfdf53da436 2023-07-21 11:16:52,681 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:52,681 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/rep_barrier/f8e5cb731248424f9ac24182335eb922 2023-07-21 11:16:52,681 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:52,682 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 11:16:52,683 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:52,683 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table 2023-07-21 11:16:52,683 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; 
major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 11:16:52,692 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/176c58e30866445dac88d784f537577a 2023-07-21 11:16:52,708 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/table/4749bcea1e764757be2898f2ea93c5d8 2023-07-21 11:16:52,708 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:52,709 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:52,710 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740 2023-07-21 11:16:52,713 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
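The FlushLargeStoresPolicy DEBUG entry above falls back to memstore-flush-size divided by the number of families because hbase:meta's table descriptor carries no hbase.hregion.percolumnfamilyflush.size.lower.bound value. A hedged sketch of setting that descriptor value on an ordinary table: the property key is taken verbatim from the log line, while the table name and the 16 MB value are hypothetical.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FlushLowerBoundSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableDescriptor current = admin.getDescriptor(TableName.valueOf("my_table"));
      TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
          // Key copied from the log; 16 MB here is an illustrative value only.
          .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound", "16777216")
          .build();
      admin.modifyTable(updated);
    }
  }
}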
2023-07-21 11:16:52,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 11:16:52,717 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=190; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9514200000, jitterRate=-0.11392107605934143}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 11:16:52,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 11:16:52,719 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=150, masterSystemTime=1689938212602 2023-07-21 11:16:52,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:52,722 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 11:16:52,732 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38565,1689938211542, state=OPEN 2023-07-21 11:16:52,732 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 11:16:52,733 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 11:16:52,733 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 11:16:52,733 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 24889 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 11:16:52,733 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-21 11:16:52,733 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-21 11:16:52,734 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc] into tmpdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp, totalSize=24.3 K 2023-07-21 11:16:52,734 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] 
compactions.Compactor(207): Compacting 3536ab124fb54a2fb8a540fbd6311b09, keycount=28, bloomtype=NONE, size=8.0 K, encoding=NONE, compression=NONE, seqNum=154, earliestPutTs=1689938163260 2023-07-21 11:16:52,735 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=CLOSED 2023-07-21 11:16:52,735 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938212735"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938212735"}]},"ts":"1689938212735"} 2023-07-21 11:16:52,735 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.Compactor(207): Compacting 5c902cb369004c06a80ca0785e879dc9, keycount=26, bloomtype=NONE, size=7.7 K, encoding=NONE, compression=NONE, seqNum=171, earliestPutTs=1689938197611 2023-07-21 11:16:52,735 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=CLOSED 2023-07-21 11:16:52,735 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938212735"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938212735"}]},"ts":"1689938212735"} 2023-07-21 11:16:52,736 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.Compactor(207): Compacting beb74a5d244f4aa1a3f983de3a1805bc, keycount=33, bloomtype=NONE, size=8.6 K, encoding=NONE, compression=NONE, seqNum=186, earliestPutTs=1689938210417 2023-07-21 11:16:52,736 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34931] ipc.CallRunner(144): callId: 60 service: ClientService methodName: Mutate size: 217 connection: 136.243.18.41:37198 deadline: 1689938272736, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=38565 startCode=1689938211542. As of locationSeqNum=186. 2023-07-21 11:16:52,737 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34931] ipc.CallRunner(144): callId: 61 service: ClientService methodName: Mutate size: 209 connection: 136.243.18.41:37198 deadline: 1689938272736, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=38565 startCode=1689938211542. As of locationSeqNum=186. 2023-07-21 11:16:52,739 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 11:16:52,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-21 11:16:52,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38565,1689938211542 in 288 msec 2023-07-21 11:16:52,760 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 684 msec 2023-07-21 11:16:52,777 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#21 average throughput is 5.76 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 11:16:52,810 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/002bb99a33844116be8e0df75a599b24 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/002bb99a33844116be8e0df75a599b24 2023-07-21 11:16:52,819 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 002bb99a33844116be8e0df75a599b24(size=10.8 K), total size for store is 10.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 11:16:52,819 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-21 11:16:52,819 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1689938212719; duration=0sec 2023-07-21 11:16:52,819 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:52,845 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:52,846 INFO [RS-EventLoopGroup-17-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42442, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:52,850 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=142 2023-07-21 11:16:52,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=142, state=SUCCESS; CloseRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,34931,1689938205269 in 772 msec 2023-07-21 11:16:52,851 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=143 2023-07-21 11:16:52,851 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,38565,1689938211542; forceNewPlan=false, retain=false 2023-07-21 11:16:52,851 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; CloseRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,34931,1689938205269 in 773 msec 2023-07-21 11:16:52,852 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=143, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,38565,1689938211542; forceNewPlan=false, retain=false 2023-07-21 11:16:52,852 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 
11:16:52,852 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938212852"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938212852"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938212852"}]},"ts":"1689938212852"} 2023-07-21 11:16:52,853 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:52,853 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938212853"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938212853"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938212853"}]},"ts":"1689938212853"} 2023-07-21 11:16:52,855 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=143, state=RUNNABLE; OpenRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,38565,1689938211542}] 2023-07-21 11:16:52,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=142, state=RUNNABLE; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,38565,1689938211542}] 2023-07-21 11:16:52,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:52,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:52,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:52,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:52,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 0 ms 2023-07-21 11:16:52,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:52,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2782e41606006289532e239f665ea4eb 1/1 column families, dataSize=2.25 KB heapSize=3.77 KB 2023-07-21 11:16:52,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.25 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:52,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:52,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/292d403d79e94215b99a4768ef4ab0fa as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:52,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:52,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa, entries=5, sequenceid=101, filesize=5.3 K 2023-07-21 11:16:52,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.25 KB/2306, heapSize ~3.76 KB/3848, currentSize=0 B/0 for 2782e41606006289532e239f665ea4eb in 37ms, sequenceid=101, compaction requested=true 2023-07-21 11:16:52,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=90 2023-07-21 11:16:52,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:52,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
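The close of hbase:rsgroup above flushes ~2.25 KB from the m family into a new store file and notes "compaction requested=true"; that flush-then-close sequence is performed automatically by the region server. As a hedged aside (not part of the log), the same flush and follow-up compaction can also be requested explicitly through the public Admin API, assuming a standard client connection:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushCompactSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableName rsgroupTable = TableName.valueOf("hbase:rsgroup");
      admin.flush(rsgroupTable);        // persist the memstore, as in the 37ms flush above
      admin.majorCompact(rsgroupTable); // rewrite the small m-family files into one
    }
  }
}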
2023-07-21 11:16:52,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:52,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 2782e41606006289532e239f665ea4eb move to jenkins-hbase17.apache.org,38565,1689938211542 record at close sequenceid=101 2023-07-21 11:16:52,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:52,945 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=CLOSED 2023-07-21 11:16:52,945 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938212945"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689938212945"}]},"ts":"1689938212945"} 2023-07-21 11:16:52,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=145 2023-07-21 11:16:52,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=145, state=SUCCESS; CloseRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,35473,1689938205409 in 864 msec 2023-07-21 11:16:52,948 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=145, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,38565,1689938211542; forceNewPlan=false, retain=false 2023-07-21 11:16:53,026 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:53,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2bd94f497343684e2f5a451c6e430d4d, NAME => 'hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:53,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:53,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,027 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,028 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:53,028 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info 2023-07-21 11:16:53,029 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2bd94f497343684e2f5a451c6e430d4d columnFamilyName info 2023-07-21 11:16:53,037 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:53,037 DEBUG [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/info/db07fdd1032644e6999e588b237b5bc3 2023-07-21 11:16:53,037 INFO [StoreOpener-2bd94f497343684e2f5a451c6e430d4d-1] regionserver.HStore(310): Store=2bd94f497343684e2f5a451c6e430d4d/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:53,038 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:53,043 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2bd94f497343684e2f5a451c6e430d4d; next sequenceid=21; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9993598720, jitterRate=-0.06927359104156494}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:53,044 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:53,044 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., pid=152, masterSystemTime=1689938213008 2023-07-21 11:16:53,049 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:53,050 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:53,050 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
2023-07-21 11:16:53,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77ef890485c37098a66e3a9a030a0490, NAME => 'hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:53,050 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=2bd94f497343684e2f5a451c6e430d4d, regionState=OPEN, openSeqNum=21, regionLocation=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:53,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,050 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689938213050"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938213050"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938213050"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938213050"}]},"ts":"1689938213050"} 2023-07-21 11:16:53,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:53,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,053 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=142 2023-07-21 11:16:53,053 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=142, state=SUCCESS; OpenRegionProcedure 2bd94f497343684e2f5a451c6e430d4d, server=jenkins-hbase17.apache.org,38565,1689938211542 in 195 msec 2023-07-21 11:16:53,054 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2bd94f497343684e2f5a451c6e430d4d, REOPEN/MOVE in 989 msec 2023-07-21 11:16:53,055 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,055 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/q 2023-07-21 11:16:53,056 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/q 2023-07-21 11:16:53,056 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77ef890485c37098a66e3a9a030a0490 columnFamilyName q 2023-07-21 11:16:53,056 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(310): Store=77ef890485c37098a66e3a9a030a0490/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:53,057 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,057 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/u 2023-07-21 11:16:53,057 DEBUG [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/u 2023-07-21 11:16:53,058 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77ef890485c37098a66e3a9a030a0490 columnFamilyName u 2023-07-21 11:16:53,058 INFO [StoreOpener-77ef890485c37098a66e3a9a030a0490-1] regionserver.HStore(310): Store=77ef890485c37098a66e3a9a030a0490/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:53,059 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,060 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-21 11:16:53,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:53,063 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 77ef890485c37098a66e3a9a030a0490; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10817338240, jitterRate=0.007443130016326904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 11:16:53,063 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:53,064 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490., pid=151, masterSystemTime=1689938213008 2023-07-21 11:16:53,065 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:53,065 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
2023-07-21 11:16:53,066 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=77ef890485c37098a66e3a9a030a0490, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:53,066 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689938213066"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938213066"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938213066"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938213066"}]},"ts":"1689938213066"} 2023-07-21 11:16:53,068 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=143 2023-07-21 11:16:53,069 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=143, state=SUCCESS; OpenRegionProcedure 77ef890485c37098a66e3a9a030a0490, server=jenkins-hbase17.apache.org,38565,1689938211542 in 213 msec 2023-07-21 11:16:53,069 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=77ef890485c37098a66e3a9a030a0490, REOPEN/MOVE in 1.0000 sec 2023-07-21 11:16:53,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] procedure.ProcedureSyncWait(216): waitFor pid=142 2023-07-21 11:16:53,098 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:53,098 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938213098"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689938213098"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689938213098"}]},"ts":"1689938213098"} 2023-07-21 11:16:53,100 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=145, state=RUNNABLE; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,38565,1689938211542}] 2023-07-21 11:16:53,255 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:53,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2782e41606006289532e239f665ea4eb, NAME => 'hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.', STARTKEY => '', ENDKEY => ''} 2023-07-21 11:16:53,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 11:16:53,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
service=MultiRowMutationService 2023-07-21 11:16:53,255 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 11:16:53,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 11:16:53,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,257 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,265 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:53,266 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m 2023-07-21 11:16:53,266 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2782e41606006289532e239f665ea4eb columnFamilyName m 2023-07-21 11:16:53,275 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:53,278 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:53,284 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60 2023-07-21 11:16:53,292 DEBUG [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3 2023-07-21 11:16:53,292 INFO [StoreOpener-2782e41606006289532e239f665ea4eb-1] regionserver.HStore(310): Store=2782e41606006289532e239f665ea4eb/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 11:16:53,293 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:53,299 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2782e41606006289532e239f665ea4eb; next sequenceid=105; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5933dd63, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 11:16:53,299 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:53,300 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., pid=153, masterSystemTime=1689938213251 2023-07-21 11:16:53,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:53,302 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 11:16:53,303 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15777 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 11:16:53,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 
2023-07-21 11:16:53,303 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HStore(1912): 2782e41606006289532e239f665ea4eb/m is initiating minor compaction (all files) 2023-07-21 11:16:53,303 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:53,303 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 2782e41606006289532e239f665ea4eb/m in hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:53,303 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa] into tmpdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp, totalSize=15.4 K 2023-07-21 11:16:53,303 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=2782e41606006289532e239f665ea4eb, regionState=OPEN, openSeqNum=105, regionLocation=jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:53,304 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689938213303"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689938213303"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689938213303"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689938213303"}]},"ts":"1689938213303"} 2023-07-21 11:16:53,304 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.Compactor(207): Compacting aeb270fc9f7943c29e25e4ef55952a60, keycount=2, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=79, earliestPutTs=1689938188801 2023-07-21 11:16:53,304 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.Compactor(207): Compacting caeb8cb159f544518af404b183b96da3, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=87, earliestPutTs=1689938201890 2023-07-21 11:16:53,305 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] compactions.Compactor(207): Compacting 292d403d79e94215b99a4768ef4ab0fa, keycount=5, bloomtype=ROW, size=5.3 K, encoding=NONE, compression=NONE, seqNum=101, earliestPutTs=1689938212061 2023-07-21 11:16:53,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=145 2023-07-21 11:16:53,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=145, state=SUCCESS; OpenRegionProcedure 2782e41606006289532e239f665ea4eb, server=jenkins-hbase17.apache.org,38565,1689938211542 in 205 msec 2023-07-21 11:16:53,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, 
region=2782e41606006289532e239f665ea4eb, REOPEN/MOVE in 1.2320 sec 2023-07-21 11:16:53,320 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] throttle.PressureAwareThroughputController(145): 2782e41606006289532e239f665ea4eb#m#compaction#23 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 11:16:53,339 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/2d8045a530014477afb3190567f2f588 as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/2d8045a530014477afb3190567f2f588 2023-07-21 11:16:53,345 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 2782e41606006289532e239f665ea4eb/m of 2782e41606006289532e239f665ea4eb into 2d8045a530014477afb3190567f2f588(size=5.3 K), total size for store is 5.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 11:16:53,345 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:53,345 INFO [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., storeName=2782e41606006289532e239f665ea4eb/m, priority=13, startTime=1689938213300; duration=0sec 2023-07-21 11:16:53,345 DEBUG [RS:3;jenkins-hbase17:38565-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 11:16:54,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] procedure.ProcedureSyncWait(216): waitFor pid=145 2023-07-21 11:16:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33343,1689938205105, jenkins-hbase17.apache.org,34931,1689938205269, jenkins-hbase17.apache.org,35473,1689938205409] are moved back to default 2023-07-21 11:16:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:54,078 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35473] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:59938 deadline: 1689938274078, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=38565 startCode=1689938211542. As of locationSeqNum=101. 
2023-07-21 11:16:54,184 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34931] ipc.CallRunner(144): callId: 5 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:58216 deadline: 1689938274184, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=38565 startCode=1689938211542. As of locationSeqNum=186. 2023-07-21 11:16:54,290 DEBUG [hconnection-0x3b9be96-shared-pool-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 11:16:54,292 INFO [RS-EventLoopGroup-17-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42444, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 11:16:54,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:54,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:54,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:54,315 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 11:16:54,322 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60590, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 11:16:54,323 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33343] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33343,1689938205105' ***** 2023-07-21 11:16:54,323 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33343] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x5573a6dd 2023-07-21 11:16:54,323 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:54,329 INFO [RS:0;jenkins-hbase17:33343] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6f5782aa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:54,331 INFO [RS:0;jenkins-hbase17:33343] server.AbstractConnector(383): Stopped ServerConnector@3e322de{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:54,332 INFO [RS:0;jenkins-hbase17:33343] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:54,334 INFO [RS:0;jenkins-hbase17:33343] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@30cceb92{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:54,336 INFO [Listener at localhost.localdomain/33557] 
hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:54,336 INFO [RS:0;jenkins-hbase17:33343] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@43c8f9b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:54,340 INFO [RS:0;jenkins-hbase17:33343] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:54,340 INFO [RS:0;jenkins-hbase17:33343] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:54,341 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:54,341 INFO [RS:0;jenkins-hbase17:33343] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:54,341 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:54,341 DEBUG [RS:0;jenkins-hbase17:33343] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f4c2130 to 127.0.0.1:61077 2023-07-21 11:16:54,341 DEBUG [RS:0;jenkins-hbase17:33343] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:54,341 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33343,1689938205105; all regions closed. 2023-07-21 11:16:54,358 DEBUG [RS:0;jenkins-hbase17:33343] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:54,358 INFO [RS:0;jenkins-hbase17:33343] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C33343%2C1689938205105:(num 1689938206155) 2023-07-21 11:16:54,358 DEBUG [RS:0;jenkins-hbase17:33343] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:54,358 INFO [RS:0;jenkins-hbase17:33343] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:54,358 INFO [RS:0;jenkins-hbase17:33343] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:54,359 INFO [RS:0;jenkins-hbase17:33343] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:54,359 INFO [RS:0;jenkins-hbase17:33343] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:54,359 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:54,359 INFO [RS:0;jenkins-hbase17:33343] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 11:16:54,361 INFO [RS:0;jenkins-hbase17:33343] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33343 2023-07-21 11:16:54,394 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:54,445 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:54,445 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:54,445 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:54,445 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:54,445 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:54,445 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33343,1689938205105] 2023-07-21 11:16:54,446 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33343,1689938205105; numProcessing=1 2023-07-21 11:16:54,446 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:54,446 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:54,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:54,447 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 znode expired, triggering replicatorRemoved event 2023-07-21 11:16:54,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:54,448 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:54,448 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:54,448 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33343,1689938205105 already deleted, retry=false 2023-07-21 11:16:54,448 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,33343,1689938205105 on jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:54,449 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=154, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,33343,1689938205105, splitWal=true, meta=false 2023-07-21 11:16:54,449 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=154 for jenkins-hbase17.apache.org,33343,1689938205105 (carryingMeta=false) jenkins-hbase17.apache.org,33343,1689938205105/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@501ee674[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 11:16:54,452 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:54,452 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:54,452 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 11:16:54,453 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 2023-07-21 11:16:54,453 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:54,453 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=154, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33343,1689938205105, splitWal=true, meta=false 2023-07-21 11:16:54,453 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35473] ipc.CallRunner(144): callId: 74 service: ClientService methodName: ExecService size: 578 connection: 136.243.18.41:59936 deadline: 1689938274453, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=38565 startCode=1689938211542. As of locationSeqNum=101. 
2023-07-21 11:16:54,454 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,33343,1689938205105 had 0 regions 2023-07-21 11:16:54,461 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=154, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33343,1689938205105, splitWal=true, meta=false, isMeta: false 2023-07-21 11:16:54,467 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105-splitting 2023-07-21 11:16:54,468 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105-splitting dir is empty, no logs to split. 2023-07-21 11:16:54,468 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,33343,1689938205105 WAL count=0, meta=false 2023-07-21 11:16:54,476 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105-splitting dir is empty, no logs to split. 2023-07-21 11:16:54,476 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,33343,1689938205105 WAL count=0, meta=false 2023-07-21 11:16:54,476 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,33343,1689938205105 WAL splitting is done? wals=0, meta=false 2023-07-21 11:16:54,479 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,33343,1689938205105 failed, ignore...File hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,33343,1689938205105-splitting does not exist. 2023-07-21 11:16:54,481 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,33343,1689938205105 after splitting done 2023-07-21 11:16:54,481 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,33343,1689938205105 from processing; numProcessing=0 2023-07-21 11:16:54,482 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=154, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,33343,1689938205105, splitWal=true, meta=false in 33 msec 2023-07-21 11:16:54,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(2362): Client=jenkins//136.243.18.41 clear dead region servers. 2023-07-21 11:16:54,556 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:54,556 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:33343-0x10187975688001d, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:54,556 INFO [RS:0;jenkins-hbase17:33343] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33343,1689938205105; zookeeper connection closed. 
2023-07-21 11:16:54,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:54,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:54,557 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2b88defd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2b88defd 2023-07-21 11:16:54,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:54,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:54,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:54,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:54,557 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 znode expired, triggering replicatorRemoved event 2023-07-21 11:16:54,557 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,33343,1689938205105 znode expired, triggering replicatorRemoved event 2023-07-21 11:16:54,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:54,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:54,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:54,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:54,559 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:54,559 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:54,563 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:54,564 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:54,564 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,564 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:54,570 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 11:16:54,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:54,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:54,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 11:16:54,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase17.apache.org:33343] from RSGroup done 2023-07-21 11:16:54,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:54,585 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34931] ipc.CallRunner(144): callId: 78 service: ClientService methodName: Scan size: 146 connection: 136.243.18.41:37198 deadline: 1689938274585, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=38565 startCode=1689938211542. As of locationSeqNum=18. 
2023-07-21 11:16:54,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:54,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:54,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:54,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 11:16:54,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:54,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 11:16:54,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:54,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 11:16:54,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:54,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 11:16:54,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:54,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 11:16:54,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 11:16:54,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 11:16:54,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:34931, jenkins-hbase17.apache.org:35473] to rsgroup default 2023-07-21 11:16:54,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:54,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:54,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_1036591474, current retry=0 2023-07-21 11:16:54,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,34931,1689938205269, jenkins-hbase17.apache.org,35473,1689938205409] are moved back to Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_1036591474 => default 2023-07-21 11:16:54,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 11:16:54,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testClearDeadServers_1036591474 2023-07-21 11:16:54,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:54,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 11:16:54,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 11:16:54,719 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 11:16:54,731 INFO [Listener at localhost.localdomain/33557] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 11:16:54,731 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:54,731 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:54,731 INFO [Listener at 
localhost.localdomain/33557] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 11:16:54,731 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 11:16:54,731 INFO [Listener at localhost.localdomain/33557] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 11:16:54,731 INFO [Listener at localhost.localdomain/33557] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 11:16:54,733 INFO [Listener at localhost.localdomain/33557] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38965 2023-07-21 11:16:54,733 INFO [Listener at localhost.localdomain/33557] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 11:16:54,743 DEBUG [Listener at localhost.localdomain/33557] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 11:16:54,744 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:54,745 INFO [Listener at localhost.localdomain/33557] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 11:16:54,746 INFO [Listener at localhost.localdomain/33557] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38965 connecting to ZooKeeper ensemble=127.0.0.1:61077 2023-07-21 11:16:54,750 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:389650x0, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 11:16:54,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38965-0x10187975688002a connected 2023-07-21 11:16:54,755 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 11:16:54,756 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 11:16:54,757 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ZKUtil(164): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 11:16:54,764 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38965 2023-07-21 11:16:54,768 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38965 2023-07-21 11:16:54,768 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38965 2023-07-21 11:16:54,770 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38965 2023-07-21 11:16:54,770 DEBUG [Listener at localhost.localdomain/33557] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38965 2023-07-21 11:16:54,773 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 11:16:54,773 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 11:16:54,773 INFO [Listener at localhost.localdomain/33557] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 11:16:54,774 INFO [Listener at localhost.localdomain/33557] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 11:16:54,774 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 11:16:54,774 INFO [Listener at localhost.localdomain/33557] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 11:16:54,774 INFO [Listener at localhost.localdomain/33557] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 11:16:54,775 INFO [Listener at localhost.localdomain/33557] http.HttpServer(1146): Jetty bound to port 45713 2023-07-21 11:16:54,775 INFO [Listener at localhost.localdomain/33557] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 11:16:54,812 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:54,813 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3835f95c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,AVAILABLE} 2023-07-21 11:16:54,813 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:54,813 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e8f751c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 11:16:54,937 INFO [Listener at localhost.localdomain/33557] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 11:16:54,938 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 11:16:54,938 INFO [Listener at localhost.localdomain/33557] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 11:16:54,938 INFO [Listener at localhost.localdomain/33557] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 11:16:54,939 INFO [Listener at localhost.localdomain/33557] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 11:16:54,940 INFO [Listener at localhost.localdomain/33557] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@149d3e36{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/java.io.tmpdir/jetty-0_0_0_0-45713-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5684727423127813819/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:54,943 INFO [Listener at localhost.localdomain/33557] server.AbstractConnector(333): Started ServerConnector@602a8929{HTTP/1.1, (http/1.1)}{0.0.0.0:45713} 2023-07-21 11:16:54,943 INFO [Listener at localhost.localdomain/33557] server.Server(415): Started @63926ms 2023-07-21 11:16:54,952 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(951): ClusterId : 93849ffe-6088-40b5-9569-fd892bfff1c2 2023-07-21 11:16:54,954 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 11:16:54,956 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 11:16:54,956 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 11:16:54,959 DEBUG [RS:4;jenkins-hbase17:38965] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 11:16:54,962 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ReadOnlyZKClient(139): Connect 0x038a4cc5 to 127.0.0.1:61077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 11:16:54,997 DEBUG [RS:4;jenkins-hbase17:38965] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d4da2c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 11:16:54,997 DEBUG [RS:4;jenkins-hbase17:38965] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@97227b5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:55,006 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase17:38965 2023-07-21 11:16:55,006 INFO [RS:4;jenkins-hbase17:38965] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 11:16:55,006 INFO [RS:4;jenkins-hbase17:38965] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 11:16:55,006 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 11:16:55,007 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,38633,1689938204808 with isa=jenkins-hbase17.apache.org/136.243.18.41:38965, startcode=1689938214730 2023-07-21 11:16:55,007 DEBUG [RS:4;jenkins-hbase17:38965] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 11:16:55,009 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34971, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 11:16:55,010 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38633] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,010 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 11:16:55,010 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae 2023-07-21 11:16:55,010 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36511 2023-07-21 11:16:55,010 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40421 2023-07-21 11:16:55,012 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,012 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,012 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,012 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,012 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38965,1689938214730] 2023-07-21 11:16:55,012 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,013 WARN [RS:4;jenkins-hbase17:38965] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 11:16:55,013 INFO [RS:4;jenkins-hbase17:38965] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 11:16:55,013 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/WALs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,015 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:55,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,015 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 11:16:55,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,016 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,38633,1689938204808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 11:16:55,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:55,017 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:55,017 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:55,022 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,022 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,022 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,022 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ZKUtil(162): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:55,024 DEBUG [RS:4;jenkins-hbase17:38965] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 11:16:55,024 INFO [RS:4;jenkins-hbase17:38965] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 11:16:55,030 INFO [RS:4;jenkins-hbase17:38965] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 11:16:55,032 INFO [RS:4;jenkins-hbase17:38965] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 11:16:55,033 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:55,036 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 11:16:55,038 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,038 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,039 DEBUG [RS:4;jenkins-hbase17:38965] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 11:16:55,044 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:55,044 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:55,045 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 11:16:55,055 INFO [RS:4;jenkins-hbase17:38965] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 11:16:55,056 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38965,1689938214730-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 11:16:55,068 INFO [RS:4;jenkins-hbase17:38965] regionserver.Replication(203): jenkins-hbase17.apache.org,38965,1689938214730 started 2023-07-21 11:16:55,068 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38965,1689938214730, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38965, sessionid=0x10187975688002a 2023-07-21 11:16:55,068 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 11:16:55,068 DEBUG [RS:4;jenkins-hbase17:38965] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,068 DEBUG [RS:4;jenkins-hbase17:38965] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38965,1689938214730' 2023-07-21 11:16:55,068 DEBUG [RS:4;jenkins-hbase17:38965] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 11:16:55,069 DEBUG [RS:4;jenkins-hbase17:38965] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 11:16:55,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 11:16:55,069 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 11:16:55,070 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 11:16:55,070 DEBUG [RS:4;jenkins-hbase17:38965] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,070 DEBUG [RS:4;jenkins-hbase17:38965] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38965,1689938214730' 2023-07-21 11:16:55,070 DEBUG [RS:4;jenkins-hbase17:38965] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 11:16:55,070 DEBUG [RS:4;jenkins-hbase17:38965] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 11:16:55,071 DEBUG [RS:4;jenkins-hbase17:38965] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 11:16:55,071 INFO [RS:4;jenkins-hbase17:38965] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 11:16:55,071 INFO [RS:4;jenkins-hbase17:38965] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 11:16:55,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 11:16:55,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 11:16:55,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 11:16:55,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 11:16:55,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:55,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:55,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:38633] to rsgroup master 2023-07-21 11:16:55,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 11:16:55,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:38532 deadline: 1689939415079, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. 2023-07-21 11:16:55,079 WARN [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:38633 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-21 11:16:55,084 INFO [Listener at localhost.localdomain/33557] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 11:16:55,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 11:16:55,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 11:16:55,086 INFO [Listener at localhost.localdomain/33557] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:34931, jenkins-hbase17.apache.org:35473, jenkins-hbase17.apache.org:38565, jenkins-hbase17.apache.org:38965], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 11:16:55,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 11:16:55,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38633] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 11:16:55,113 INFO [Listener at localhost.localdomain/33557] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=581 (was 562) - Thread LEAK? -, OpenFileDescriptor=929 (was 844) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=684 (was 727), ProcessCount=186 (was 186), AvailableMemoryMB=2179 (was 2554) 2023-07-21 11:16:55,113 WARN [Listener at localhost.localdomain/33557] hbase.ResourceChecker(130): Thread=581 is superior to 500 2023-07-21 11:16:55,114 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 11:16:55,114 INFO [Listener at localhost.localdomain/33557] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 11:16:55,114 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ed9d4df to 127.0.0.1:61077 2023-07-21 11:16:55,114 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,114 DEBUG [Listener at localhost.localdomain/33557] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 11:16:55,114 DEBUG [Listener at localhost.localdomain/33557] util.JVMClusterUtil(257): Found active master hash=1195358716, stopped=false 2023-07-21 11:16:55,114 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 11:16:55,114 DEBUG [Listener at localhost.localdomain/33557] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 11:16:55,114 INFO [Listener at localhost.localdomain/33557] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:55,115 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, 
quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:55,115 INFO [Listener at localhost.localdomain/33557] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 11:16:55,116 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:55,116 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:55,116 DEBUG [Listener at localhost.localdomain/33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e6dce7c to 127.0.0.1:61077 2023-07-21 11:16:55,117 DEBUG [Listener at localhost.localdomain/33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,117 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34931,1689938205269' ***** 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,35473,1689938205409' ***** 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,38565,1689938211542' ***** 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,38965,1689938214730' ***** 2023-07-21 11:16:55,118 INFO [Listener at localhost.localdomain/33557] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 11:16:55,118 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:55,118 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:55,118 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:55,118 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:55,118 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:55,118 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:55,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:55,115 DEBUG [Listener at localhost.localdomain/33557-EventThread] 
zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:55,116 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 11:16:55,123 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:55,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:55,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 11:16:55,124 INFO [RS:2;jenkins-hbase17:35473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f207495{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:55,124 INFO [RS:1;jenkins-hbase17:34931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@55e2133f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:55,124 INFO [RS:3;jenkins-hbase17:38565] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4b965b66{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:55,125 INFO [RS:2;jenkins-hbase17:35473] server.AbstractConnector(383): Stopped ServerConnector@212a25a6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:55,125 INFO [RS:3;jenkins-hbase17:38565] server.AbstractConnector(383): Stopped ServerConnector@55d5015d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:55,125 INFO [RS:2;jenkins-hbase17:35473] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:55,125 INFO [RS:3;jenkins-hbase17:38565] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:55,125 INFO [RS:1;jenkins-hbase17:34931] server.AbstractConnector(383): Stopped ServerConnector@2cc75ae4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:55,126 INFO [RS:1;jenkins-hbase17:34931] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:55,129 INFO [RS:2;jenkins-hbase17:35473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1a20ea9e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:55,129 INFO [RS:3;jenkins-hbase17:38565] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14185deb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 
2023-07-21 11:16:55,129 INFO [RS:1;jenkins-hbase17:34931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e013486{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:55,131 INFO [RS:2;jenkins-hbase17:35473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@674a6b4a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:55,133 INFO [RS:1;jenkins-hbase17:34931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5874c5e3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:55,132 INFO [RS:3;jenkins-hbase17:38565] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@288c3061{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:55,132 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:55,132 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,133 INFO [RS:4;jenkins-hbase17:38965] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@149d3e36{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 11:16:55,134 INFO [RS:4;jenkins-hbase17:38965] server.AbstractConnector(383): Stopped ServerConnector@602a8929{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:55,134 INFO [RS:4;jenkins-hbase17:38965] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:55,134 INFO [RS:2;jenkins-hbase17:35473] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:55,134 INFO [RS:1;jenkins-hbase17:34931] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:55,134 INFO [RS:2;jenkins-hbase17:35473] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:55,134 INFO [RS:3;jenkins-hbase17:38565] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:55,134 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:55,134 INFO [RS:2;jenkins-hbase17:35473] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:55,136 INFO [RS:4;jenkins-hbase17:38965] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e8f751c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:55,136 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,134 INFO [RS:1;jenkins-hbase17:34931] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 11:16:55,137 DEBUG [RS:2;jenkins-hbase17:35473] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x34e57acf to 127.0.0.1:61077 2023-07-21 11:16:55,137 INFO [RS:3;jenkins-hbase17:38565] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:55,137 INFO [RS:4;jenkins-hbase17:38965] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3835f95c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:55,137 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:55,137 INFO [RS:3;jenkins-hbase17:38565] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:55,137 DEBUG [RS:2;jenkins-hbase17:35473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,137 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(3305): Received CLOSE for 2bd94f497343684e2f5a451c6e430d4d 2023-07-21 11:16:55,137 INFO [RS:1;jenkins-hbase17:34931] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:55,138 INFO [RS:4;jenkins-hbase17:38965] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 11:16:55,138 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,137 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,35473,1689938205409; all regions closed. 2023-07-21 11:16:55,138 DEBUG [RS:1;jenkins-hbase17:34931] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x773c5dbd to 127.0.0.1:61077 2023-07-21 11:16:55,138 DEBUG [RS:1;jenkins-hbase17:34931] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,138 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34931,1689938205269; all regions closed. 2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(3305): Received CLOSE for 2782e41606006289532e239f665ea4eb 2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(3305): Received CLOSE for 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:55,138 DEBUG [RS:3;jenkins-hbase17:38565] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b5cf2fc to 127.0.0.1:61077 2023-07-21 11:16:55,138 DEBUG [RS:3;jenkins-hbase17:38565] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 11:16:55,138 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 11:16:55,144 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 11:16:55,144 INFO [RS:4;jenkins-hbase17:38965] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 11:16:55,144 INFO [RS:4;jenkins-hbase17:38965] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 11:16:55,144 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,144 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 11:16:55,145 DEBUG [RS:4;jenkins-hbase17:38965] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x038a4cc5 to 127.0.0.1:61077 2023-07-21 11:16:55,145 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1478): Online Regions={2bd94f497343684e2f5a451c6e430d4d=hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d., 1588230740=hbase:meta,,1.1588230740, 2782e41606006289532e239f665ea4eb=hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb., 77ef890485c37098a66e3a9a030a0490=hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490.} 2023-07-21 11:16:55,146 DEBUG [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1504): Waiting on 1588230740, 2782e41606006289532e239f665ea4eb, 2bd94f497343684e2f5a451c6e430d4d, 77ef890485c37098a66e3a9a030a0490 2023-07-21 11:16:55,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2bd94f497343684e2f5a451c6e430d4d, disabling compactions & flushes 2023-07-21 11:16:55,146 DEBUG [RS:4;jenkins-hbase17:38965] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 11:16:55,146 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38965,1689938214730; all regions closed. 2023-07-21 11:16:55,146 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 11:16:55,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:55,146 DEBUG [RS:4;jenkins-hbase17:38965] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:55,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. after waiting 0 ms 2023-07-21 11:16:55,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 
2023-07-21 11:16:55,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 11:16:55,147 INFO [RS:4;jenkins-hbase17:38965] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,147 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 11:16:55,147 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 11:16:55,147 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.43 KB heapSize=6.39 KB 2023-07-21 11:16:55,149 INFO [RS:4;jenkins-hbase17:38965] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:55,149 INFO [RS:4;jenkins-hbase17:38965] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:55,150 INFO [RS:4;jenkins-hbase17:38965] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:55,150 INFO [RS:4;jenkins-hbase17:38965] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:55,151 INFO [RS:4;jenkins-hbase17:38965] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38965 2023-07-21 11:16:55,151 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:55,152 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,166 DEBUG [RS:2;jenkins-hbase17:35473] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:55,166 INFO [RS:2;jenkins-hbase17:35473] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C35473%2C1689938205409:(num 1689938206148) 2023-07-21 11:16:55,166 DEBUG [RS:2;jenkins-hbase17:35473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,166 INFO [RS:2;jenkins-hbase17:35473] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,166 INFO [RS:2;jenkins-hbase17:35473] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:55,166 INFO [RS:2;jenkins-hbase17:35473] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 11:16:55,166 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:55,166 INFO [RS:2;jenkins-hbase17:35473] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:55,166 INFO [RS:2;jenkins-hbase17:35473] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 11:16:55,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/namespace/2bd94f497343684e2f5a451c6e430d4d/recovered.edits/23.seqid, newMaxSeqId=23, maxSeqId=20 2023-07-21 11:16:55,171 DEBUG [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:55,171 INFO [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34931%2C1689938205269.meta:.meta(num 1689938206291) 2023-07-21 11:16:55,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:55,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2bd94f497343684e2f5a451c6e430d4d: 2023-07-21 11:16:55,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689938162856.2bd94f497343684e2f5a451c6e430d4d. 2023-07-21 11:16:55,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2782e41606006289532e239f665ea4eb, disabling compactions & flushes 2023-07-21 11:16:55,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:55,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:55,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. after waiting 1 ms 2023-07-21 11:16:55,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:55,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2782e41606006289532e239f665ea4eb 1/1 column families, dataSize=2.10 KB heapSize=3.54 KB 2023-07-21 11:16:55,177 INFO [RS:2;jenkins-hbase17:35473] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:35473 2023-07-21 11:16:55,179 DEBUG [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:55,179 INFO [RS:1;jenkins-hbase17:34931] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34931%2C1689938205269:(num 1689938206171) 2023-07-21 11:16:55,179 DEBUG [RS:1;jenkins-hbase17:34931] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,179 INFO [RS:1;jenkins-hbase17:34931] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,179 INFO [RS:1;jenkins-hbase17:34931] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:55,179 INFO [RS:1;jenkins-hbase17:34931] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-21 11:16:55,179 INFO [RS:1;jenkins-hbase17:34931] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 11:16:55,179 INFO [RS:1;jenkins-hbase17:34931] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 11:16:55,180 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:55,202 INFO [RS:1;jenkins-hbase17:34931] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34931 2023-07-21 11:16:55,208 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,215 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,218 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.43 KB at sequenceid=202 (bloomFilter=false), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/2d80bb285720476c89bfb9fb49327deb 2023-07-21 11:16:55,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.10 KB at sequenceid=115 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/1f09c1b25f444f4bb2b11208978b95af 2023-07-21 11:16:55,226 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/.tmp/info/2d80bb285720476c89bfb9fb49327deb as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/2d80bb285720476c89bfb9fb49327deb 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] 
zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35473,1689938205409 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,240 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38965,1689938214730 2023-07-21 11:16:55,241 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase17.apache.org,34931,1689938205269 2023-07-21 11:16:55,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1f09c1b25f444f4bb2b11208978b95af 2023-07-21 11:16:55,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/.tmp/m/1f09c1b25f444f4bb2b11208978b95af as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/1f09c1b25f444f4bb2b11208978b95af 2023-07-21 11:16:55,251 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/2d80bb285720476c89bfb9fb49327deb, entries=30, sequenceid=202, filesize=8.2 K 2023-07-21 11:16:55,253 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.43 KB/3516, heapSize ~5.88 KB/6016, currentSize=0 B/0 for 1588230740 in 106ms, sequenceid=202, compaction requested=false 2023-07-21 11:16:55,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1f09c1b25f444f4bb2b11208978b95af 2023-07-21 11:16:55,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/1f09c1b25f444f4bb2b11208978b95af, entries=4, sequenceid=115, filesize=5.3 K 2023-07-21 11:16:55,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.10 KB/2150, heapSize ~3.52 KB/3608, currentSize=0 B/0 for 2782e41606006289532e239f665ea4eb in 93ms, sequenceid=115, compaction requested=false 2023-07-21 11:16:55,272 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa] to archive 2023-07-21 11:16:55,273 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc] to archive 2023-07-21 11:16:55,273 DEBUG 
[StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-21 11:16:55,284 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-21 11:16:55,286 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/info/3536ab124fb54a2fb8a540fbd6311b09 2023-07-21 11:16:55,287 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/aeb270fc9f7943c29e25e4ef55952a60 2023-07-21 11:16:55,287 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/info/5c902cb369004c06a80ca0785e879dc9 2023-07-21 11:16:55,289 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3 to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/caeb8cb159f544518af404b183b96da3 2023-07-21 11:16:55,289 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/meta/1588230740/info/beb74a5d244f4aa1a3f983de3a1805bc 2023-07-21 11:16:55,290 DEBUG [StoreCloser-hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa to hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/archive/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/m/292d403d79e94215b99a4768ef4ab0fa 2023-07-21 11:16:55,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/rsgroup/2782e41606006289532e239f665ea4eb/recovered.edits/118.seqid, newMaxSeqId=118, maxSeqId=104 2023-07-21 11:16:55,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:55,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:55,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2782e41606006289532e239f665ea4eb: 2023-07-21 11:16:55,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689938162705.2782e41606006289532e239f665ea4eb. 2023-07-21 11:16:55,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 77ef890485c37098a66e3a9a030a0490, disabling compactions & flushes 2023-07-21 11:16:55,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:55,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:55,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. after waiting 0 ms 2023-07-21 11:16:55,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:55,315 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/meta/1588230740/recovered.edits/205.seqid, newMaxSeqId=205, maxSeqId=189 2023-07-21 11:16:55,316 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 11:16:55,316 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:55,316 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 11:16:55,316 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 11:16:55,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/data/hbase/quota/77ef890485c37098a66e3a9a030a0490/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 11:16:55,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 
2023-07-21 11:16:55,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 77ef890485c37098a66e3a9a030a0490: 2023-07-21 11:16:55,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689938198608.77ef890485c37098a66e3a9a030a0490. 2023-07-21 11:16:55,346 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38565,1689938211542; all regions closed. 2023-07-21 11:16:55,351 DEBUG [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:55,351 INFO [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C38565%2C1689938211542.meta:.meta(num 1689938212610) 2023-07-21 11:16:55,358 DEBUG [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/oldWALs 2023-07-21 11:16:55,358 INFO [RS:3;jenkins-hbase17:38565] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C38565%2C1689938211542:(num 1689938211949) 2023-07-21 11:16:55,358 DEBUG [RS:3;jenkins-hbase17:38565] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,358 INFO [RS:3;jenkins-hbase17:38565] regionserver.LeaseManager(133): Closed leases 2023-07-21 11:16:55,359 INFO [RS:3;jenkins-hbase17:38565] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 11:16:55,359 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 11:16:55,360 INFO [RS:3;jenkins-hbase17:38565] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38565 2023-07-21 11:16:55,368 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38965,1689938214730] 2023-07-21 11:16:55,368 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38965,1689938214730; numProcessing=1 2023-07-21 11:16:55,372 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 11:16:55,372 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38565,1689938211542 2023-07-21 11:16:55,373 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38965,1689938214730 already deleted, retry=false 2023-07-21 11:16:55,373 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,38965,1689938214730 expired; onlineServers=3 2023-07-21 11:16:55,373 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,34931,1689938205269] 2023-07-21 11:16:55,373 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,34931,1689938205269; numProcessing=2 2023-07-21 11:16:55,374 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,34931,1689938205269 already deleted, retry=false 2023-07-21 11:16:55,374 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,34931,1689938205269 expired; onlineServers=2 2023-07-21 11:16:55,374 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,35473,1689938205409] 2023-07-21 11:16:55,374 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,35473,1689938205409; numProcessing=3 2023-07-21 11:16:55,375 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,35473,1689938205409 already deleted, retry=false 2023-07-21 11:16:55,375 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,35473,1689938205409 expired; onlineServers=1 2023-07-21 11:16:55,375 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38565,1689938211542] 2023-07-21 11:16:55,375 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38565,1689938211542; numProcessing=4 2023-07-21 11:16:55,376 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38565,1689938211542 already deleted, retry=false 2023-07-21 11:16:55,376 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,38565,1689938211542 expired; onlineServers=0 
2023-07-21 11:16:55,376 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,38633,1689938204808' ***** 2023-07-21 11:16:55,376 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 11:16:55,376 DEBUG [M:0;jenkins-hbase17:38633] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@430035c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 11:16:55,376 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 11:16:55,383 INFO [M:0;jenkins-hbase17:38633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5357b32e{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 11:16:55,383 INFO [M:0;jenkins-hbase17:38633] server.AbstractConnector(383): Stopped ServerConnector@1907347{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:55,383 INFO [M:0;jenkins-hbase17:38633] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 11:16:55,385 INFO [M:0;jenkins-hbase17:38633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d48f97{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 11:16:55,386 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 11:16:55,386 INFO [M:0;jenkins-hbase17:38633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10d43f4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/hadoop.log.dir/,STOPPED} 2023-07-21 11:16:55,386 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 11:16:55,386 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 11:16:55,386 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38633,1689938204808 2023-07-21 11:16:55,386 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38633,1689938204808; all regions closed. 
2023-07-21 11:16:55,386 DEBUG [M:0;jenkins-hbase17:38633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:55,386 INFO [M:0;jenkins-hbase17:38633] master.HMaster(1491): Stopping master jetty server 2023-07-21 11:16:55,392 INFO [M:0;jenkins-hbase17:38633] server.AbstractConnector(383): Stopped ServerConnector@50b8409{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 11:16:55,395 DEBUG [M:0;jenkins-hbase17:38633] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 11:16:55,395 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 11:16:55,395 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938205838] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689938205838,5,FailOnTimeoutGroup] 2023-07-21 11:16:55,395 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938205838] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689938205838,5,FailOnTimeoutGroup] 2023-07-21 11:16:55,395 DEBUG [M:0;jenkins-hbase17:38633] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 11:16:55,395 INFO [M:0;jenkins-hbase17:38633] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 11:16:55,395 INFO [M:0;jenkins-hbase17:38633] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 11:16:55,395 INFO [M:0;jenkins-hbase17:38633] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 11:16:55,395 DEBUG [M:0;jenkins-hbase17:38633] master.HMaster(1512): Stopping service threads 2023-07-21 11:16:55,395 INFO [M:0;jenkins-hbase17:38633] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 11:16:55,396 ERROR [M:0;jenkins-hbase17:38633] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 11:16:55,396 INFO [M:0;jenkins-hbase17:38633] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 11:16:55,396 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 11:16:55,397 DEBUG [M:0;jenkins-hbase17:38633] zookeeper.ZKUtil(398): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 11:16:55,397 WARN [M:0;jenkins-hbase17:38633] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 11:16:55,397 INFO [M:0;jenkins-hbase17:38633] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 11:16:55,397 INFO [M:0;jenkins-hbase17:38633] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 11:16:55,398 DEBUG [M:0;jenkins-hbase17:38633] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 11:16:55,398 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:55,398 DEBUG [M:0;jenkins-hbase17:38633] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:55,398 DEBUG [M:0;jenkins-hbase17:38633] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 11:16:55,398 DEBUG [M:0;jenkins-hbase17:38633] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 11:16:55,398 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=74.03 KB heapSize=90.96 KB 2023-07-21 11:16:55,429 INFO [M:0;jenkins-hbase17:38633] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=74.03 KB at sequenceid=1179 (bloomFilter=true), to=hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c6faec0a908b48568300b1a9fb9832cb 2023-07-21 11:16:55,437 DEBUG [M:0;jenkins-hbase17:38633] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c6faec0a908b48568300b1a9fb9832cb as hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c6faec0a908b48568300b1a9fb9832cb 2023-07-21 11:16:55,442 INFO [M:0;jenkins-hbase17:38633] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36511/user/jenkins/test-data/4a48b7e6-8cd5-7eae-b4f4-d778ac50edae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c6faec0a908b48568300b1a9fb9832cb, entries=24, sequenceid=1179, filesize=8.3 K 2023-07-21 11:16:55,443 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegion(2948): Finished flush of dataSize ~74.03 KB/75808, heapSize ~90.95 KB/93128, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 45ms, sequenceid=1179, compaction requested=true 2023-07-21 11:16:55,446 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 11:16:55,446 DEBUG [M:0;jenkins-hbase17:38633] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 11:16:55,451 INFO [M:0;jenkins-hbase17:38633] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 11:16:55,451 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 11:16:55,451 INFO [M:0;jenkins-hbase17:38633] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38633 2023-07-21 11:16:55,452 DEBUG [M:0;jenkins-hbase17:38633] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,38633,1689938204808 already deleted, retry=false 2023-07-21 11:16:55,516 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,516 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38565-0x101879756880028, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,516 INFO [RS:3;jenkins-hbase17:38565] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38565,1689938211542; zookeeper connection closed. 2023-07-21 11:16:55,520 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@31461143] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@31461143 2023-07-21 11:16:55,616 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,616 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:35473-0x10187975688001f, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,616 INFO [RS:2;jenkins-hbase17:35473] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,35473,1689938205409; zookeeper connection closed. 2023-07-21 11:16:55,617 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@461a037f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@461a037f 2023-07-21 11:16:55,716 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,716 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:38965-0x10187975688002a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,716 INFO [RS:4;jenkins-hbase17:38965] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38965,1689938214730; zookeeper connection closed. 
2023-07-21 11:16:55,716 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@e6d64f4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@e6d64f4 2023-07-21 11:16:55,816 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,816 INFO [RS:1;jenkins-hbase17:34931] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34931,1689938205269; zookeeper connection closed. 2023-07-21 11:16:55,816 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): regionserver:34931-0x10187975688001e, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:55,817 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@41aeb5e6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@41aeb5e6 2023-07-21 11:16:55,817 INFO [Listener at localhost.localdomain/33557] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-21 11:16:56,017 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:56,017 INFO [M:0;jenkins-hbase17:38633] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38633,1689938204808; zookeeper connection closed. 2023-07-21 11:16:56,017 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): master:38633-0x10187975688001c, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 11:16:56,018 WARN [Listener at localhost.localdomain/33557] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:16:56,025 INFO [Listener at localhost.localdomain/33557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:16:56,134 WARN [BP-1138614856-136.243.18.41-1689938153171 heartbeating to localhost.localdomain/127.0.0.1:36511] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:16:56,134 WARN [BP-1138614856-136.243.18.41-1689938153171 heartbeating to localhost.localdomain/127.0.0.1:36511] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1138614856-136.243.18.41-1689938153171 (Datanode Uuid 7c1fb44b-3290-4700-b701-b83031f3b3d9) service to localhost.localdomain/127.0.0.1:36511 2023-07-21 11:16:56,136 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data5/current/BP-1138614856-136.243.18.41-1689938153171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:16:56,137 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data6/current/BP-1138614856-136.243.18.41-1689938153171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread 
Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:16:56,171 DEBUG [Finalizer] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1121a5df to 127.0.0.1:61077 2023-07-21 11:16:56,171 DEBUG [Finalizer] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 11:16:56,179 WARN [Listener at localhost.localdomain/33557] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:16:56,222 INFO [Listener at localhost.localdomain/33557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:16:56,326 WARN [BP-1138614856-136.243.18.41-1689938153171 heartbeating to localhost.localdomain/127.0.0.1:36511] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:16:56,326 WARN [BP-1138614856-136.243.18.41-1689938153171 heartbeating to localhost.localdomain/127.0.0.1:36511] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1138614856-136.243.18.41-1689938153171 (Datanode Uuid 4e13056a-3c02-4d90-a700-907346e45ae0) service to localhost.localdomain/127.0.0.1:36511 2023-07-21 11:16:56,327 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data3/current/BP-1138614856-136.243.18.41-1689938153171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:16:56,328 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data4/current/BP-1138614856-136.243.18.41-1689938153171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:16:56,329 WARN [Listener at localhost.localdomain/33557] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 11:16:56,336 INFO [Listener at localhost.localdomain/33557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 11:16:56,344 WARN [BP-1138614856-136.243.18.41-1689938153171 heartbeating to localhost.localdomain/127.0.0.1:36511] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 11:16:56,344 WARN [BP-1138614856-136.243.18.41-1689938153171 heartbeating to localhost.localdomain/127.0.0.1:36511] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1138614856-136.243.18.41-1689938153171 (Datanode Uuid 359ae0fa-be87-41cd-9a97-293b91cb17e2) service to localhost.localdomain/127.0.0.1:36511 2023-07-21 11:16:56,345 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data1/current/BP-1138614856-136.243.18.41-1689938153171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:16:56,346 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d6159ed-a229-feea-2bc0-c731521dc9e7/cluster_29417768-610a-73d1-3478-d09434f7cb09/dfs/data/data2/current/BP-1138614856-136.243.18.41-1689938153171] fs.CachingGetSpaceUsed$RefreshThread(183): 
Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 11:16:56,390 INFO [Listener at localhost.localdomain/33557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 11:16:56,421 INFO [Listener at localhost.localdomain/33557] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 11:16:56,518 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10187975688001b, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 11:16:56,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10187975688001b, quorum=127.0.0.1:61077, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 11:16:56,518 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101879756880027, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 11:16:56,519 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101879756880027, quorum=127.0.0.1:61077, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 11:16:56,519 INFO [Listener at localhost.localdomain/33557] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 11:16:56,520 DEBUG [Listener at localhost.localdomain/33557-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10187975688000a, quorum=127.0.0.1:61077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 11:16:56,520 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10187975688000a, quorum=127.0.0.1:61077, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring