2023-07-23 21:10:35,004 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8
2023-07-23 21:10:35,024 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins
2023-07-23 21:10:35,042 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-23 21:10:35,042 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392, deleteOnExit=true
2023-07-23 21:10:35,043 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-23 21:10:35,043 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/test.cache.data in system properties and HBase conf
2023-07-23 21:10:35,044 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.tmp.dir in system properties and HBase conf
2023-07-23 21:10:35,044 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir in system properties and HBase conf
2023-07-23 21:10:35,045 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-23 21:10:35,046 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-23 21:10:35,046 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-23 21:10:35,180 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-23 21:10:35,626 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-23 21:10:35,631 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-23 21:10:35,632 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-23 21:10:35,632 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-23 21:10:35,632 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 21:10:35,633 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-23 21:10:35,633 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-23 21:10:35,633 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 21:10:35,634 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 21:10:35,634 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-23 21:10:35,635 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/nfs.dump.dir in system properties and HBase conf
2023-07-23 21:10:35,635 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir in system properties and HBase conf
2023-07-23 21:10:35,635 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 21:10:35,636 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-23 21:10:35,636 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-23 21:10:36,162 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 21:10:36,167 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 21:10:36,470 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-23 21:10:36,654 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-23 21:10:36,670 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:36,729 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:36,779 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/Jetty_localhost_36299_hdfs____.bdi953/webapp
2023-07-23 21:10:36,963 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36299
2023-07-23 21:10:36,973 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 21:10:36,973 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 21:10:37,449 WARN [Listener at localhost/32841] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:37,545 WARN [Listener at localhost/32841] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 21:10:37,570 WARN [Listener at localhost/32841] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:37,578 INFO [Listener at localhost/32841] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:37,608 INFO [Listener at localhost/32841] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/Jetty_localhost_40749_datanode____.vifzmh/webapp
2023-07-23 21:10:37,755 INFO [Listener at localhost/32841] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40749
2023-07-23 21:10:38,235 WARN [Listener at localhost/41181] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:38,252 WARN [Listener at localhost/41181] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 21:10:38,256 WARN [Listener at localhost/41181] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:38,259 INFO [Listener at localhost/41181] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:38,267 INFO [Listener at localhost/41181] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/Jetty_localhost_39641_datanode____q2cewk/webapp
2023-07-23 21:10:38,379 INFO [Listener at localhost/41181] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39641
2023-07-23 21:10:38,394 WARN [Listener at localhost/41995] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:38,423 WARN [Listener at localhost/41995] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 21:10:38,427 WARN [Listener at localhost/41995] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:38,429 INFO [Listener at localhost/41995] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:38,436 INFO [Listener at localhost/41995] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/Jetty_localhost_33827_datanode____.rv4k8s/webapp
2023-07-23 21:10:38,595 INFO [Listener at localhost/41995] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33827
2023-07-23 21:10:38,639 WARN [Listener at localhost/38995] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:38,850 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaa25c840b97ae48d: Processing first storage report for DS-6c151b7c-b95d-426a-9e2b-4f02874248ad from datanode 6cbb2b85-d5f1-47fa-94cc-60f17130e30b
2023-07-23 21:10:38,852 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaa25c840b97ae48d: from storage DS-6c151b7c-b95d-426a-9e2b-4f02874248ad node DatanodeRegistration(127.0.0.1:39165, datanodeUuid=6cbb2b85-d5f1-47fa-94cc-60f17130e30b, infoPort=38921, infoSecurePort=0, ipcPort=41995, storageInfo=lv=-57;cid=testClusterID;nsid=2136292987;c=1690146636244), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-23 21:10:38,852 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xad382c24e3f0ea9e: Processing first storage report for DS-9cccb944-77e5-4b0a-929d-b38957409f93 from datanode a1b981de-5b65-4d22-9fec-4f78943f74e4
2023-07-23 21:10:38,852 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xad382c24e3f0ea9e: from storage DS-9cccb944-77e5-4b0a-929d-b38957409f93 node DatanodeRegistration(127.0.0.1:39257, datanodeUuid=a1b981de-5b65-4d22-9fec-4f78943f74e4, infoPort=39073, infoSecurePort=0, ipcPort=38995, storageInfo=lv=-57;cid=testClusterID;nsid=2136292987;c=1690146636244), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 21:10:38,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x890753cedb408b3b: Processing first storage report for DS-aa8d0171-132d-4da2-b07d-13febd9cf809 from datanode 4beab255-6c9c-4249-939b-72fa2a908107
2023-07-23 21:10:38,853 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x890753cedb408b3b: from storage DS-aa8d0171-132d-4da2-b07d-13febd9cf809 node DatanodeRegistration(127.0.0.1:33589, datanodeUuid=4beab255-6c9c-4249-939b-72fa2a908107, infoPort=40543, infoSecurePort=0, ipcPort=41181, storageInfo=lv=-57;cid=testClusterID;nsid=2136292987;c=1690146636244), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 21:10:38,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaa25c840b97ae48d: Processing first storage report for DS-0a7db3d3-cfc0-4328-85be-6c5a6f244fb5 from datanode 6cbb2b85-d5f1-47fa-94cc-60f17130e30b
2023-07-23 21:10:38,853 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaa25c840b97ae48d: from storage DS-0a7db3d3-cfc0-4328-85be-6c5a6f244fb5 node DatanodeRegistration(127.0.0.1:39165, datanodeUuid=6cbb2b85-d5f1-47fa-94cc-60f17130e30b, infoPort=38921, infoSecurePort=0, ipcPort=41995, storageInfo=lv=-57;cid=testClusterID;nsid=2136292987;c=1690146636244), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 21:10:38,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xad382c24e3f0ea9e: Processing first storage report for DS-f0650994-d612-42ba-8a38-7191c831714d from datanode a1b981de-5b65-4d22-9fec-4f78943f74e4
2023-07-23 21:10:38,854 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xad382c24e3f0ea9e: from storage DS-f0650994-d612-42ba-8a38-7191c831714d node DatanodeRegistration(127.0.0.1:39257, datanodeUuid=a1b981de-5b65-4d22-9fec-4f78943f74e4, infoPort=39073, infoSecurePort=0, ipcPort=38995, storageInfo=lv=-57;cid=testClusterID;nsid=2136292987;c=1690146636244), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 21:10:38,854 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x890753cedb408b3b: Processing first storage report for DS-5035350a-fbf0-4c1d-8d3d-a4a62a601187 from datanode 4beab255-6c9c-4249-939b-72fa2a908107
2023-07-23 21:10:38,854 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x890753cedb408b3b: from storage DS-5035350a-fbf0-4c1d-8d3d-a4a62a601187 node DatanodeRegistration(127.0.0.1:33589, datanodeUuid=4beab255-6c9c-4249-939b-72fa2a908107, infoPort=40543, infoSecurePort=0, ipcPort=41181, storageInfo=lv=-57;cid=testClusterID;nsid=2136292987;c=1690146636244), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 21:10:39,058 DEBUG [Listener at localhost/38995] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8
2023-07-23 21:10:39,144 INFO [Listener at localhost/38995] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/zookeeper_0, clientPort=59847, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-23 21:10:39,159 INFO [Listener at localhost/38995] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59847
2023-07-23 21:10:39,166 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:39,168 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:39,827 INFO [Listener at localhost/38995] util.FSUtils(471): Created version file at hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 with version=8
2023-07-23 21:10:39,827 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/hbase-staging
2023-07-23 21:10:39,835 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-23 21:10:39,835 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-23 21:10:39,835 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-23 21:10:39,835 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-23 21:10:40,173 INFO [Listener at localhost/38995] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-23 21:10:40,674 INFO [Listener at localhost/38995] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 21:10:40,710 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:40,711 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:40,711 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 21:10:40,712 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:40,712 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 21:10:40,861 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 21:10:40,938 DEBUG [Listener at localhost/38995] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-23 21:10:41,034 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35573
2023-07-23 21:10:41,045 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:41,047 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:41,068 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35573 connecting to ZooKeeper ensemble=127.0.0.1:59847
2023-07-23 21:10:41,109 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:355730x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 21:10:41,112 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35573-0x1019405901c0000 connected
2023-07-23 21:10:41,144 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 21:10:41,145 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 21:10:41,151 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 21:10:41,172 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35573
2023-07-23 21:10:41,173 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35573
2023-07-23 21:10:41,174 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35573
2023-07-23 21:10:41,175 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35573
2023-07-23 21:10:41,175 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35573
2023-07-23 21:10:41,208 INFO [Listener at localhost/38995] log.Log(170): Logging initialized @7008ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-23 21:10:41,349 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 21:10:41,350 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 21:10:41,351 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 21:10:41,353 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-23 21:10:41,353 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 21:10:41,353 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 21:10:41,357 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 21:10:41,421 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 41339
2023-07-23 21:10:41,423 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 21:10:41,460 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:41,464 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1bf4d331{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE}
2023-07-23 21:10:41,465 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:41,465 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@437bc3bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 21:10:41,636 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 21:10:41,648 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 21:10:41,648 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 21:10:41,650 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-23 21:10:41,656 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:41,683 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@15490b41{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-41339-hbase-server-2_4_18-SNAPSHOT_jar-_-any-612247464385587071/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 21:10:41,695 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@4a54f3d5{HTTP/1.1, (http/1.1)}{0.0.0.0:41339}
2023-07-23 21:10:41,696 INFO [Listener at localhost/38995] server.Server(415): Started @7496ms
2023-07-23 21:10:41,699 INFO [Listener at localhost/38995] master.HMaster(444): hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914, hbase.cluster.distributed=false
2023-07-23 21:10:41,775 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 21:10:41,775 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:41,776 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:41,776 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 21:10:41,776 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:41,776 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 21:10:41,783 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 21:10:41,787 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42727
2023-07-23 21:10:41,791 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 21:10:41,800 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 21:10:41,801 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:41,803 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:41,805 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42727 connecting to ZooKeeper ensemble=127.0.0.1:59847
2023-07-23 21:10:41,809 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:427270x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 21:10:41,810 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:427270x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 21:10:41,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42727-0x1019405901c0001 connected
2023-07-23 21:10:41,815 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 21:10:41,818 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 21:10:41,819 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42727
2023-07-23 21:10:41,819 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42727
2023-07-23 21:10:41,823 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42727
2023-07-23 21:10:41,826 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42727
2023-07-23 21:10:41,826 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42727
2023-07-23 21:10:41,830 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 21:10:41,830 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 21:10:41,831 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 21:10:41,832 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 21:10:41,832 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 21:10:41,832 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 21:10:41,832 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 21:10:41,834 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 42049
2023-07-23 21:10:41,834 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 21:10:41,838 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:41,839 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@488a8507{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE}
2023-07-23 21:10:41,839 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:41,839 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2e7996f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 21:10:41,966 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 21:10:41,968 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 21:10:41,968 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 21:10:41,969 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-23 21:10:41,970 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:41,976 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@431a391d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-42049-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7396998868498545091/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 21:10:41,977 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@37d22d71{HTTP/1.1, (http/1.1)}{0.0.0.0:42049}
2023-07-23 21:10:41,977 INFO [Listener at localhost/38995] server.Server(415): Started @7777ms
2023-07-23 21:10:41,996 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 21:10:41,996 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:41,997 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:41,997 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 21:10:41,997 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:41,998 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 21:10:41,998 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 21:10:42,000 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36963
2023-07-23 21:10:42,000 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 21:10:42,004 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 21:10:42,005 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:42,007 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:42,008 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36963 connecting to ZooKeeper ensemble=127.0.0.1:59847
2023-07-23 21:10:42,024 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:369630x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 21:10:42,026 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:369630x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 21:10:42,027 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36963-0x1019405901c0002 connected
2023-07-23 21:10:42,028 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 21:10:42,029 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 21:10:42,034 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36963
2023-07-23 21:10:42,038 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36963
2023-07-23 21:10:42,039 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36963
2023-07-23 21:10:42,039 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36963
2023-07-23 21:10:42,040 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36963
2023-07-23 21:10:42,042 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 21:10:42,043 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 21:10:42,043 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 21:10:42,044 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 21:10:42,044 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 21:10:42,044 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 21:10:42,044 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 21:10:42,045 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 36955 2023-07-23 21:10:42,045 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:42,056 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:42,056 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@143aeb70{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:42,057 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:42,057 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5dda6698{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:42,189 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:42,190 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:42,190 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:42,191 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:42,192 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:42,193 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7c3ddaff{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-36955-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2158787615325522177/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:42,194 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@4e28ad7f{HTTP/1.1, (http/1.1)}{0.0.0.0:36955} 2023-07-23 21:10:42,194 INFO [Listener at localhost/38995] server.Server(415): Started @7994ms 2023-07-23 21:10:42,212 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:42,212 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:42,212 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:42,212 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:42,213 INFO 
[Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:42,213 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:42,213 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:42,215 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46485 2023-07-23 21:10:42,215 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:42,216 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:42,218 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:42,219 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:42,220 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46485 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:10:42,226 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:464850x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:42,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46485-0x1019405901c0003 connected 2023-07-23 21:10:42,228 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:42,229 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:42,230 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:42,230 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46485 2023-07-23 21:10:42,231 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46485 2023-07-23 21:10:42,231 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46485 2023-07-23 21:10:42,232 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46485 2023-07-23 21:10:42,232 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46485 2023-07-23 21:10:42,235 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:42,235 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:42,235 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:42,236 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:42,236 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:42,236 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:42,236 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:10:42,237 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 37087 2023-07-23 21:10:42,237 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:42,239 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:42,239 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5769bd85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:42,240 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:42,240 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@367a968b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:42,382 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:42,383 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:42,383 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:42,384 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:10:42,385 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:42,386 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@731442c6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-37087-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9143353625339542826/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:42,388 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@f66c3ee{HTTP/1.1, (http/1.1)}{0.0.0.0:37087} 2023-07-23 21:10:42,388 INFO [Listener at localhost/38995] server.Server(415): Started @8188ms 2023-07-23 21:10:42,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:42,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6037568c{HTTP/1.1, (http/1.1)}{0.0.0.0:38923} 2023-07-23 21:10:42,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8200ms 2023-07-23 21:10:42,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:42,413 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:10:42,418 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:42,443 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:42,443 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:42,443 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:42,443 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:42,445 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:42,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:42,449 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:42,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35573,1690146639994 from backup master directory 2023-07-23 21:10:42,453 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:42,453 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:10:42,454 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:42,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:42,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-23 21:10:42,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-23 21:10:42,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/hbase.id with ID: af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:10:42,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:42,611 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:42,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0a94471e to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:42,706 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65f5b1ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:42,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:42,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 21:10:42,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-23 21:10:42,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-23 21:10:42,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:42,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:42,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:42,803 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store-tmp 2023-07-23 21:10:42,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:42,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:10:42,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:42,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:42,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:10:42,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:42,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
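The descriptor printed above for the master's local 'master:store' region uses a single 'proc' family with VERSIONS => '1', BLOOMFILTER => 'ROW' and BLOCKSIZE => '65536'. Purely as an illustration, the same column-family attributes can be expressed with the public HBase 2.x client API roughly as below; the class name and the "demo:store" table are hypothetical, and the real master store region is created internally rather than through this API.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class ProcFamilyDescriptorSketch {
  public static TableDescriptor build() {
    // Mirrors the attributes logged above: VERSIONS => '1', BLOOMFILTER => 'ROW',
    // BLOCKSIZE => '65536', IN_MEMORY => 'false'; remaining attributes are defaults.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)
        .setBloomFilterType(BloomType.ROW)
        .setBlocksize(65536)
        .setInMemory(false)
        .build();
    // "demo:store" is a placeholder name for this sketch only.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo", "store"))
        .setColumnFamily(proc)
        .build();
  }
}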
2023-07-23 21:10:42,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:42,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:42,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35573%2C1690146639994, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/oldWALs, maxLogs=10 2023-07-23 21:10:42,925 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:42,925 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:42,925 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:42,933 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:43,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 2023-07-23 21:10:43,000 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK]] 2023-07-23 21:10:43,001 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:43,001 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:43,006 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:43,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:43,071 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:43,078 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 21:10:43,108 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 21:10:43,120 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-23 21:10:43,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:43,128 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:43,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:43,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:43,154 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10070951200, jitterRate=-0.06206957995891571}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:43,154 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:43,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 21:10:43,187 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 21:10:43,187 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 21:10:43,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 21:10:43,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-23 21:10:43,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 41 msec 2023-07-23 21:10:43,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 21:10:43,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 21:10:43,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
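The ProcedureExecutor above starts 5 core workers with a burst limit of 50. A minimal sketch of overriding that core worker count is shown below; it assumes the standard hbase.master.procedure.threads key, and the value actually used by this test run is not visible in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class ProcedureWorkerConfigSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key: controls the ProcedureExecutor core worker count (5 in the run above).
    conf.setInt("hbase.master.procedure.threads", 5);
    return conf;
  }
}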
2023-07-23 21:10:43,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 21:10:43,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 21:10:43,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 21:10:43,286 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:43,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 21:10:43,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 21:10:43,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 21:10:43,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:43,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:43,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:43,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:43,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:43,307 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35573,1690146639994, sessionid=0x1019405901c0000, setting cluster-up flag (Was=false) 2023-07-23 21:10:43,335 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 21:10:43,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:43,350 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:43,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 21:10:43,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:43,359 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/.tmp 2023-07-23 21:10:43,392 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:10:43,393 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:10:43,393 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:10:43,400 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:43,400 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:43,400 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:43,407 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:43,407 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:43,407 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:43,407 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:43,407 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:43,407 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:43,413 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:43,413 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:43,413 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:43,415 DEBUG [RS:1;jenkins-hbase4:36963] zookeeper.ReadOnlyZKClient(139): Connect 0x1b7d15cf to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-23 21:10:43,416 DEBUG [RS:2;jenkins-hbase4:46485] zookeeper.ReadOnlyZKClient(139): Connect 0x10bde4a4 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:43,416 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ReadOnlyZKClient(139): Connect 0x255efbb8 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:43,425 DEBUG [RS:2;jenkins-hbase4:46485] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55bf78d7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:43,425 DEBUG [RS:1;jenkins-hbase4:36963] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a4b11da, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:43,426 DEBUG [RS:2;jenkins-hbase4:46485] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63b2769b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:43,426 DEBUG [RS:1;jenkins-hbase4:36963] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b0501dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:43,426 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@797d4db6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:43,426 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@467870d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:43,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 21:10:43,455 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42727 2023-07-23 21:10:43,456 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46485 2023-07-23 21:10:43,457 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:36963 2023-07-23 21:10:43,461 INFO [RS:0;jenkins-hbase4:42727] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:43,462 INFO [RS:1;jenkins-hbase4:36963] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:43,462 INFO [RS:0;jenkins-hbase4:42727] 
regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:43,462 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:43,461 INFO [RS:2;jenkins-hbase4:46485] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:43,463 INFO [RS:2;jenkins-hbase4:46485] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:43,463 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:43,462 INFO [RS:1;jenkins-hbase4:36963] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:43,463 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:43,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 21:10:43,466 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:46485, startcode=1690146642211 2023-07-23 21:10:43,466 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:42727, startcode=1690146641774 2023-07-23 21:10:43,466 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:36963, startcode=1690146641995 2023-07-23 21:10:43,467 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:43,470 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 21:10:43,470 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
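The master above registers the RSGroupAdminService coprocessor service and loads RSGroupAdminEndpoint at priority 536870911. On branch-2 the rsgroup feature is enabled through configuration; a minimal sketch of that wiring follows (the helper class is hypothetical, the keys and class names are the ones visible in this log and the standard ones).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class RsGroupConfigSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Register the rsgroup master coprocessor, as seen loading above.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Use the group-aware balancer so assignments respect rsgroup boundaries.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}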
2023-07-23 21:10:43,488 DEBUG [RS:0;jenkins-hbase4:42727] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:43,488 DEBUG [RS:2;jenkins-hbase4:46485] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:43,488 DEBUG [RS:1;jenkins-hbase4:36963] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:43,554 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59529, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:43,554 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58173, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:43,554 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33631, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:43,564 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:43,573 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:43,574 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:43,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:43,595 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 21:10:43,595 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 21:10:43,595 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 21:10:43,595 WARN [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 21:10:43,595 WARN [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 21:10:43,596 WARN [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 21:10:43,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:10:43,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:10:43,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:10:43,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
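The three region servers above are rejected with ServerNotRunningYetException and retry reportForDuty after sleeping 100 ms. The sketch below only illustrates that retry-with-sleep pattern; it is not the HRegionServer implementation, and every name in it is hypothetical.

public final class RetrySketch {
  interface RegistrationCall {
    boolean attempt() throws Exception;   // true once the master accepts the report
  }

  public static void retryUntilRegistered(RegistrationCall call, long sleepMs)
      throws InterruptedException {
    while (true) {
      try {
        if (call.attempt()) {
          return;                          // registered successfully
        }
      } catch (Exception e) {
        // e.g. the master RPC endpoint is up but not yet serving requests
      }
      Thread.sleep(sleepMs);               // the run above pauses 100 ms between attempts
    }
  }
}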
2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:43,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690146673632 2023-07-23 21:10:43,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 21:10:43,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 21:10:43,641 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:43,642 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 21:10:43,645 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:43,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 21:10:43,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 21:10:43,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 21:10:43,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 21:10:43,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 21:10:43,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 21:10:43,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 21:10:43,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 21:10:43,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 21:10:43,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146643661,5,FailOnTimeoutGroup] 2023-07-23 21:10:43,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146643661,5,FailOnTimeoutGroup] 2023-07-23 21:10:43,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 21:10:43,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
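The cleaner chores above (LogsCleaner and HFileCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms, SnapshotCleaner every 1800000 ms) are fixed-period background tasks driven by the master's ChoreService. A minimal plain-Java sketch of the same fixed-period pattern, using java.util.concurrent rather than the internal ChoreService API:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class CleanerChoreSketch {
  public static ScheduledExecutorService scheduleLogCleaner(Runnable cleanOldWals) {
    ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
    // Same cadence as the LogsCleaner chore above: every 600000 ms (10 minutes).
    pool.scheduleAtFixedRate(cleanOldWals, 600_000, 600_000, TimeUnit.MILLISECONDS);
    return pool;
  }
}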
2023-07-23 21:10:43,697 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:42727, startcode=1690146641774 2023-07-23 21:10:43,697 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:46485, startcode=1690146642211 2023-07-23 21:10:43,697 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:36963, startcode=1690146641995 2023-07-23 21:10:43,705 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,707 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:43,712 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,713 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,719 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:10:43,720 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
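The ServerEventsListenerThread entries above track membership of the 'default' rsgroup as each region server registers. Group membership is normally changed through the rsgroup admin API; the sketch below assumes the RSGroupAdminClient class and method signatures from the hbase-rsgroup module, and the host and port are placeholders taken from this run.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class RsGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      // Create a group and move one server out of the 'default' group tracked above.
      groups.addRSGroup("testgroup");
      groups.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42727)),
          "testgroup");
    }
  }
}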
2023-07-23 21:10:43,720 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:10:43,722 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:10:43,722 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:10:43,722 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:10:43,723 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:10:43,723 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41339 2023-07-23 21:10:43,722 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:10:43,722 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:10:43,723 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41339 2023-07-23 21:10:43,723 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41339 2023-07-23 21:10:43,736 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:43,742 DEBUG [RS:2;jenkins-hbase4:46485] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,742 WARN [RS:2;jenkins-hbase4:46485] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:43,742 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,742 INFO [RS:2;jenkins-hbase4:46485] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:43,742 WARN [RS:0;jenkins-hbase4:42727] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:10:43,743 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,743 INFO [RS:0;jenkins-hbase4:42727] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:43,744 DEBUG [RS:1;jenkins-hbase4:36963] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,744 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,743 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36963,1690146641995] 2023-07-23 21:10:43,744 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46485,1690146642211] 2023-07-23 21:10:43,744 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42727,1690146641774] 2023-07-23 21:10:43,744 WARN [RS:1;jenkins-hbase4:36963] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:43,744 INFO [RS:1;jenkins-hbase4:36963] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:43,745 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,751 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:43,753 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:43,753 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:10:43,760 DEBUG [RS:2;jenkins-hbase4:46485] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,760 DEBUG [RS:1;jenkins-hbase4:36963] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,761 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,761 DEBUG [RS:2;jenkins-hbase4:46485] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,761 DEBUG [RS:1;jenkins-hbase4:36963] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,762 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,762 DEBUG [RS:2;jenkins-hbase4:46485] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,762 DEBUG [RS:1;jenkins-hbase4:36963] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,762 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,788 DEBUG [RS:2;jenkins-hbase4:46485] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:43,788 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:43,788 DEBUG [RS:1;jenkins-hbase4:36963] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:43,790 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:43,793 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:10:43,797 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:10:43,797 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:10:43,798 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:43,799 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:10:43,802 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:43,803 INFO [RS:2;jenkins-hbase4:46485] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:43,803 INFO [RS:0;jenkins-hbase4:42727] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:43,803 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:10:43,804 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:43,805 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:10:43,808 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 
2023-07-23 21:10:43,808 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:10:43,809 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:43,811 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:10:43,812 INFO [RS:1;jenkins-hbase4:36963] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:43,812 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:10:43,818 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:10:43,822 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:10:43,826 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:43,827 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10180680800, jitterRate=-0.05185021460056305}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:10:43,827 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:10:43,828 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:10:43,828 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:10:43,828 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:10:43,828 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:10:43,828 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:10:43,829 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:43,829 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:10:43,837 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:43,837 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 21:10:43,838 INFO [RS:2;jenkins-hbase4:46485] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:43,838 INFO [RS:1;jenkins-hbase4:36963] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:43,839 INFO [RS:0;jenkins-hbase4:42727] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:43,851 INFO [RS:1;jenkins-hbase4:36963] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:43,852 INFO [RS:0;jenkins-hbase4:42727] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:43,852 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:43,851 INFO [RS:2;jenkins-hbase4:46485] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:43,853 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,853 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,855 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:43,858 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:43,859 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:43,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 21:10:43,868 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,868 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,868 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:43,868 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,869 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:43,870 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:43,870 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-23 21:10:43,870 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:43,870 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,871 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,871 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,871 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,870 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,871 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,871 DEBUG [RS:1;jenkins-hbase4:36963] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,871 DEBUG [RS:2;jenkins-hbase4:46485] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:43,872 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,872 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,873 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,877 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,877 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,877 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:43,877 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,877 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,878 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,888 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 21:10:43,891 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 21:10:43,897 INFO [RS:0;jenkins-hbase4:42727] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:43,897 INFO [RS:1;jenkins-hbase4:36963] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:43,897 INFO [RS:2;jenkins-hbase4:46485] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:43,900 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42727,1690146641774-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,900 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36963,1690146641995-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:43,900 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46485,1690146642211-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:43,918 INFO [RS:2;jenkins-hbase4:46485] regionserver.Replication(203): jenkins-hbase4.apache.org,46485,1690146642211 started 2023-07-23 21:10:43,918 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46485,1690146642211, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46485, sessionid=0x1019405901c0003 2023-07-23 21:10:43,918 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:43,918 DEBUG [RS:2;jenkins-hbase4:46485] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,918 DEBUG [RS:2;jenkins-hbase4:46485] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46485,1690146642211' 2023-07-23 21:10:43,918 DEBUG [RS:2;jenkins-hbase4:46485] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:43,919 DEBUG [RS:2;jenkins-hbase4:46485] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:43,920 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:43,920 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:43,920 DEBUG [RS:2;jenkins-hbase4:46485] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:43,920 DEBUG [RS:2;jenkins-hbase4:46485] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46485,1690146642211' 2023-07-23 21:10:43,920 DEBUG [RS:2;jenkins-hbase4:46485] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:43,920 DEBUG [RS:2;jenkins-hbase4:46485] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:43,921 INFO [RS:0;jenkins-hbase4:42727] regionserver.Replication(203): jenkins-hbase4.apache.org,42727,1690146641774 started 2023-07-23 21:10:43,921 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42727,1690146641774, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42727, sessionid=0x1019405901c0001 2023-07-23 21:10:43,921 INFO [RS:1;jenkins-hbase4:36963] regionserver.Replication(203): jenkins-hbase4.apache.org,36963,1690146641995 started 2023-07-23 21:10:43,921 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:43,921 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36963,1690146641995, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36963, sessionid=0x1019405901c0002 2023-07-23 21:10:43,921 DEBUG [RS:2;jenkins-hbase4:46485] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:43,921 DEBUG [RS:0;jenkins-hbase4:42727] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,922 INFO [RS:2;jenkins-hbase4:46485] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 
2023-07-23 21:10:43,922 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42727,1690146641774' 2023-07-23 21:10:43,922 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:43,922 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:43,922 INFO [RS:2;jenkins-hbase4:46485] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:10:43,922 DEBUG [RS:1;jenkins-hbase4:36963] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,922 DEBUG [RS:1;jenkins-hbase4:36963] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36963,1690146641995' 2023-07-23 21:10:43,922 DEBUG [RS:1;jenkins-hbase4:36963] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:43,923 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:43,923 DEBUG [RS:1;jenkins-hbase4:36963] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:43,923 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:43,923 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:43,924 DEBUG [RS:0;jenkins-hbase4:42727] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:43,924 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:43,924 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:43,924 DEBUG [RS:1;jenkins-hbase4:36963] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:43,924 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42727,1690146641774' 2023-07-23 21:10:43,925 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:43,924 DEBUG [RS:1;jenkins-hbase4:36963] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36963,1690146641995' 2023-07-23 21:10:43,925 DEBUG [RS:1;jenkins-hbase4:36963] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:43,925 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:43,925 DEBUG [RS:1;jenkins-hbase4:36963] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:43,925 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(53): Procedure 
online-snapshot started 2023-07-23 21:10:43,926 DEBUG [RS:1;jenkins-hbase4:36963] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:43,926 INFO [RS:0;jenkins-hbase4:42727] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:43,926 INFO [RS:1;jenkins-hbase4:36963] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:43,926 INFO [RS:1;jenkins-hbase4:36963] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:10:43,926 INFO [RS:0;jenkins-hbase4:42727] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:10:44,035 INFO [RS:1;jenkins-hbase4:36963] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36963%2C1690146641995, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:10:44,035 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42727%2C1690146641774, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:10:44,035 INFO [RS:2;jenkins-hbase4:46485] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46485%2C1690146642211, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:10:44,043 DEBUG [jenkins-hbase4:35573] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:10:44,067 DEBUG [jenkins-hbase4:35573] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:44,067 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:44,067 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:44,067 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:44,074 DEBUG [jenkins-hbase4:35573] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:44,074 DEBUG [jenkins-hbase4:35573] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:44,074 DEBUG 
[jenkins-hbase4:35573] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:44,075 DEBUG [jenkins-hbase4:35573] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:44,075 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:44,076 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:44,079 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:44,080 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:44,080 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:44,080 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:44,081 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42727,1690146641774, state=OPENING 2023-07-23 21:10:44,120 INFO [RS:2;jenkins-hbase4:46485] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211/jenkins-hbase4.apache.org%2C46485%2C1690146642211.1690146644042 2023-07-23 21:10:44,121 INFO [RS:1;jenkins-hbase4:36963] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995/jenkins-hbase4.apache.org%2C36963%2C1690146641995.1690146644042 2023-07-23 21:10:44,125 DEBUG [RS:2;jenkins-hbase4:46485] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK]] 2023-07-23 21:10:44,127 DEBUG [RS:1;jenkins-hbase4:36963] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK]] 2023-07-23 21:10:44,127 INFO 
[RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774/jenkins-hbase4.apache.org%2C42727%2C1690146641774.1690146644042 2023-07-23 21:10:44,128 DEBUG [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK]] 2023-07-23 21:10:44,131 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 21:10:44,132 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:44,133 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:10:44,136 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:44,227 WARN [ReadOnlyZKClient-127.0.0.1:59847@0x0a94471e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 21:10:44,252 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:44,257 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:44,258 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42727] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:45630 deadline: 1690146704258, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:44,317 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:44,321 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:44,328 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45638, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:44,344 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:10:44,344 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:44,348 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42727%2C1690146641774.meta, suffix=.meta, 
logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:10:44,371 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:44,375 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:44,377 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:44,385 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774/jenkins-hbase4.apache.org%2C42727%2C1690146641774.meta.1690146644349.meta 2023-07-23 21:10:44,386 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:10:44,386 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:44,388 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:44,391 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:10:44,393 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-23 21:10:44,399 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:10:44,399 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:44,399 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:10:44,399 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:10:44,402 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:10:44,404 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:10:44,404 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:10:44,405 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:10:44,406 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:44,406 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:10:44,408 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:44,408 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:44,409 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:10:44,410 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:44,410 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:10:44,412 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:10:44,412 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:10:44,413 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:10:44,415 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:44,418 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:10:44,430 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:10:44,434 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:10:44,437 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:10:44,439 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9545807040, jitterRate=-0.11097744107246399}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:10:44,439 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:10:44,451 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690146644314 2023-07-23 21:10:44,480 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:10:44,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:10:44,481 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42727,1690146641774, state=OPEN 2023-07-23 21:10:44,484 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:10:44,484 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:10:44,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 21:10:44,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42727,1690146641774 in 348 msec 2023-07-23 21:10:44,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 21:10:44,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 629 msec 2023-07-23 21:10:44,502 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0220 sec 2023-07-23 21:10:44,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690146644502, completionTime=-1 2023-07-23 21:10:44,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 21:10:44,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-23 21:10:44,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 21:10:44,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690146704563 2023-07-23 21:10:44,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690146764564 2023-07-23 21:10:44,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 61 msec 2023-07-23 21:10:44,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35573,1690146639994-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:44,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35573,1690146639994-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:44,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35573,1690146639994-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:44,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35573, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:44,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:44,598 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 21:10:44,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 21:10:44,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:44,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 21:10:44,632 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:44,636 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:44,656 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:44,660 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba empty. 2023-07-23 21:10:44,661 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:44,661 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 21:10:44,720 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:44,723 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cfdae6c1dde0d9be1f26f623634660ba, NAME => 'hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:44,745 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:44,745 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cfdae6c1dde0d9be1f26f623634660ba, disabling compactions & flushes 2023-07-23 21:10:44,745 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:10:44,745 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:44,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. after waiting 0 ms 2023-07-23 21:10:44,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:44,746 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:44,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:10:44,751 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:44,770 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146644754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146644754"}]},"ts":"1690146644754"} 2023-07-23 21:10:44,773 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:44,776 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 21:10:44,781 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:44,784 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:44,789 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:44,790 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 empty. 
2023-07-23 21:10:44,792 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:44,792 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 21:10:44,817 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:44,819 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:44,829 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146644819"}]},"ts":"1690146644819"} 2023-07-23 21:10:44,839 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 21:10:44,842 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:44,843 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 674d6b4e3c5d6a4f0860e9c874b3e183, NAME => 'hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:44,846 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:44,846 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:44,846 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:44,846 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:44,846 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:44,849 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN}] 2023-07-23 21:10:44,854 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN 2023-07-23 21:10:44,858 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1690146641774; forceNewPlan=false, retain=false 2023-07-23 21:10:44,878 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:44,878 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 674d6b4e3c5d6a4f0860e9c874b3e183, disabling compactions & flushes 2023-07-23 21:10:44,878 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:44,878 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:44,878 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. after waiting 0 ms 2023-07-23 21:10:44,878 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:44,878 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:44,878 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:10:44,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:44,887 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146644887"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146644887"}]},"ts":"1690146644887"} 2023-07-23 21:10:44,893 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 21:10:44,895 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:44,895 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146644895"}]},"ts":"1690146644895"} 2023-07-23 21:10:44,898 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 21:10:44,908 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:44,909 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:44,909 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:44,909 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:44,909 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:44,909 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN}] 2023-07-23 21:10:44,913 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN 2023-07-23 21:10:44,915 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36963,1690146641995; forceNewPlan=false, retain=false 2023-07-23 21:10:44,915 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
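[annotation] The entries above and below show both system regions being assigned through TransitRegionStateProcedure/OpenRegionProcedure pairs, each landing on one of the three region servers. Purely as an illustrative sketch (not part of the test code), a client could confirm the resulting placements with the standard RegionLocator API:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowAssignments {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      for (TableName tn : new TableName[] {
          TableName.valueOf("hbase", "namespace"), TableName.valueOf("hbase", "rsgroup") }) {
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Prints the encoded region name and the region server hosting it.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
  }
}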
2023-07-23 21:10:44,917 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:44,917 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:44,917 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146644917"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146644917"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146644917"}]},"ts":"1690146644917"} 2023-07-23 21:10:44,918 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146644917"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146644917"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146644917"}]},"ts":"1690146644917"} 2023-07-23 21:10:44,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure cfdae6c1dde0d9be1f26f623634660ba, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:44,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36963,1690146641995}] 2023-07-23 21:10:45,079 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,080 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:45,083 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49810, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:45,084 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:10:45,085 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cfdae6c1dde0d9be1f26f623634660ba, NAME => 'hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:45,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:45,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,088 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,089 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:45,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674d6b4e3c5d6a4f0860e9c874b3e183, NAME => 'hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:45,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:45,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. service=MultiRowMutationService 2023-07-23 21:10:45,090 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 21:10:45,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:45,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,091 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info 2023-07-23 21:10:45,091 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info 2023-07-23 21:10:45,092 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cfdae6c1dde0d9be1f26f623634660ba columnFamilyName info 2023-07-23 21:10:45,093 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(310): Store=cfdae6c1dde0d9be1f26f623634660ba/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:45,094 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,096 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,097 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:10:45,097 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:10:45,098 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674d6b4e3c5d6a4f0860e9c874b3e183 columnFamilyName m 2023-07-23 21:10:45,098 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(310): Store=674d6b4e3c5d6a4f0860e9c874b3e183/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:45,100 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,101 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,102 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:45,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:45,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:45,106 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cfdae6c1dde0d9be1f26f623634660ba; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11506276160, jitterRate=0.07160547375679016}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:45,106 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:10:45,109 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., pid=8, masterSystemTime=1690146645076 2023-07-23 21:10:45,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:45,110 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674d6b4e3c5d6a4f0860e9c874b3e183; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3e78545a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:45,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:10:45,112 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., pid=9, masterSystemTime=1690146645079 2023-07-23 21:10:45,113 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:45,113 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:45,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:45,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
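[annotation] The CompactionConfiguration lines printed while the two stores opened echo the standard store-compaction settings in effect here (minFilesToCompact=3, maxFilesToCompact=10, ratio=1.2, major period=604800000 ms, jitter=0.5). As a hedged sketch of how those same knobs are normally supplied, using the values visible in the log rather than recommendations:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  static Configuration conf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact in the log
    conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // file-selection ratio
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);   // major compaction period (ms)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f); // major compaction jitter
    return conf;
  }
}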
2023-07-23 21:10:45,116 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:45,117 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146645115"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146645115"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146645115"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146645115"}]},"ts":"1690146645115"} 2023-07-23 21:10:45,118 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,118 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146645118"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146645118"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146645118"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146645118"}]},"ts":"1690146645118"} 2023-07-23 21:10:45,125 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-23 21:10:45,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure cfdae6c1dde0d9be1f26f623634660ba, server=jenkins-hbase4.apache.org,42727,1690146641774 in 197 msec 2023-07-23 21:10:45,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-23 21:10:45,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36963,1690146641995 in 198 msec 2023-07-23 21:10:45,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-23 21:10:45,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN in 277 msec 2023-07-23 21:10:45,135 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:45,135 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146645135"}]},"ts":"1690146645135"} 2023-07-23 21:10:45,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-23 21:10:45,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN in 222 msec 2023-07-23 21:10:45,137 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:45,137 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146645137"}]},"ts":"1690146645137"} 2023-07-23 21:10:45,138 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 21:10:45,140 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 21:10:45,141 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:45,143 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:45,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 525 msec 2023-07-23 21:10:45,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 370 msec 2023-07-23 21:10:45,203 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:45,206 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49826, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:45,210 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 21:10:45,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
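[annotation] Once the startup worker reports that hbase:rsgroup is online and the GroupBasedLoadBalancer comes up, the group metadata written to /hbase/rsgroup becomes queryable. As a rough sketch assuming the branch-2 RSGroupAdminClient helper from the hbase-rsgroup module (the same RSGroupAdminService that the later "list rsgroup" RPC reaches), listing groups could look like:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListGroups {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      for (RSGroupInfo info : groups.listRSGroups()) {
        // Servers start out in the implicit "default" group seen in the znode updates above.
        System.out.println(info.getName() + " servers=" + info.getServers());
      }
    }
  }
}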
2023-07-23 21:10:45,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 21:10:45,235 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:45,235 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:45,254 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 21:10:45,274 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:45,280 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 33 msec 2023-07-23 21:10:45,281 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:45,281 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:45,283 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:45,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:10:45,290 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 21:10:45,300 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:45,307 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-07-23 21:10:45,314 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:10:45,317 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 21:10:45,317 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.863sec 2023-07-23 21:10:45,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 21:10:45,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-23 21:10:45,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 21:10:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35573,1690146639994-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 21:10:45,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35573,1690146639994-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 21:10:45,330 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x4b022e64 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:45,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 21:10:45,342 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b39f108, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:45,362 DEBUG [hconnection-0x17f15d9f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:45,374 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45640, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:45,385 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:45,386 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:45,395 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 21:10:45,398 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46040, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 21:10:45,412 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:10:45,412 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:45,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 21:10:45,418 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 
0x1654b0f2 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:45,424 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3da3b660, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:45,424 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:10:45,429 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:45,430 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019405901c000a connected 2023-07-23 21:10:45,461 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=425, OpenFileDescriptor=683, MaxFileDescriptor=60000, SystemLoadAverage=504, ProcessCount=175, AvailableMemoryMB=6087 2023-07-23 21:10:45,464 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-23 21:10:45,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:45,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:45,538 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 21:10:45,550 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:45,551 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:45,551 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:45,551 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:45,551 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:45,551 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:45,551 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:45,556 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45637 2023-07-23 21:10:45,556 INFO 
[Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:45,558 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:45,559 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:45,562 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:45,565 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45637 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:10:45,575 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:456370x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:45,577 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45637-0x1019405901c000b connected 2023-07-23 21:10:45,577 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:45,578 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 21:10:45,579 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:45,579 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45637 2023-07-23 21:10:45,580 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45637 2023-07-23 21:10:45,582 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45637 2023-07-23 21:10:45,586 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45637 2023-07-23 21:10:45,589 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45637 2023-07-23 21:10:45,592 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:45,592 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:45,592 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:45,593 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 
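[annotation] The "Restoring servers: 1" entry and the RS:3 startup that follows correspond to the test bringing an additional region server into the minicluster (the RS that binds to port 45637). A minimal sketch of that pattern with HBaseTestingUtility, with variable names assumed rather than taken from the test source:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class AddRegionServer {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3); // three region servers, as in this run's StartMiniClusterOption
    // Start a fourth region server, comparable to RS:3 appearing in the log above.
    JVMClusterUtil.RegionServerThread rs = util.getMiniHBaseCluster().startRegionServer();
    rs.waitForServerOnline();
    util.shutdownMiniCluster();
  }
}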
2023-07-23 21:10:45,593 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:45,593 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:45,593 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:10:45,594 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 43765 2023-07-23 21:10:45,594 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:45,601 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:45,601 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f0e7505{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:45,601 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:45,602 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15a0a997{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:45,730 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:45,731 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:45,731 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:45,732 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:45,734 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:45,735 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c1ac46c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-43765-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5374738148792966724/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:45,737 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@4c813439{HTTP/1.1, (http/1.1)}{0.0.0.0:43765} 2023-07-23 21:10:45,737 INFO [Listener at localhost/38995] server.Server(415): Started @11537ms 2023-07-23 21:10:45,742 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:10:45,743 DEBUG [RS:3;jenkins-hbase4:45637] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:45,746 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:45,746 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:45,752 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:45,754 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ReadOnlyZKClient(139): Connect 0x0633ec82 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:45,760 DEBUG [RS:3;jenkins-hbase4:45637] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8f41234, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:45,761 DEBUG [RS:3;jenkins-hbase4:45637] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ccb87f2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:45,774 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:45637 2023-07-23 21:10:45,774 INFO [RS:3;jenkins-hbase4:45637] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:45,774 INFO [RS:3;jenkins-hbase4:45637] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:45,774 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:45,775 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:45637, startcode=1690146645550 2023-07-23 21:10:45,775 DEBUG [RS:3;jenkins-hbase4:45637] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:45,779 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34003, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:45,780 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,780 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:10:45,781 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:10:45,781 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:10:45,781 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41339 2023-07-23 21:10:45,786 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:45,786 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:45,787 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:45,786 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:45,787 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:45,788 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,788 WARN [RS:3;jenkins-hbase4:45637] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:10:45,788 INFO [RS:3;jenkins-hbase4:45637] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:45,788 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:45,788 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45637,1690146645550] 2023-07-23 21:10:45,788 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,788 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,796 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,796 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,796 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 21:10:45,796 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,797 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:45,797 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:45,797 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:45,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:45,798 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:45,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:45,803 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,803 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,804 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:45,804 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:45,805 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:45,806 INFO [RS:3;jenkins-hbase4:45637] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:45,808 INFO [RS:3;jenkins-hbase4:45637] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:45,808 INFO [RS:3;jenkins-hbase4:45637] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:45,808 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:45,808 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:45,811 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,811 DEBUG [RS:3;jenkins-hbase4:45637] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:45,817 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:45,817 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:45,818 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:45,831 INFO [RS:3;jenkins-hbase4:45637] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:45,831 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45637,1690146645550-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:45,842 INFO [RS:3;jenkins-hbase4:45637] regionserver.Replication(203): jenkins-hbase4.apache.org,45637,1690146645550 started 2023-07-23 21:10:45,842 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45637,1690146645550, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45637, sessionid=0x1019405901c000b 2023-07-23 21:10:45,842 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:45,842 DEBUG [RS:3;jenkins-hbase4:45637] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,842 DEBUG [RS:3;jenkins-hbase4:45637] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45637,1690146645550' 2023-07-23 21:10:45,842 DEBUG [RS:3;jenkins-hbase4:45637] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:45,843 DEBUG [RS:3;jenkins-hbase4:45637] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:45,843 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:45,844 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:45,844 DEBUG [RS:3;jenkins-hbase4:45637] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:45,844 DEBUG [RS:3;jenkins-hbase4:45637] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45637,1690146645550' 2023-07-23 21:10:45,844 DEBUG [RS:3;jenkins-hbase4:45637] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:45,844 DEBUG [RS:3;jenkins-hbase4:45637] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:45,844 DEBUG [RS:3;jenkins-hbase4:45637] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:45,844 INFO [RS:3;jenkins-hbase4:45637] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:45,844 INFO [RS:3;jenkins-hbase4:45637] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
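[Editor's note] The records above show a fourth region server (RS:3 on port 45637) joining the mini cluster: it registers its ephemeral znode under /hbase/rs, the other region servers set watchers on it, and it starts its executors, chores and procedure members before reporting "Serving as jenkins-hbase4.apache.org,45637,...". In TestRSGroupsBase-style tests this extra server is normally started through the testing utility; the following is only a hedged sketch of that step, assuming the usual HBaseTestingUtility/MiniHBaseCluster API and a TEST_UTIL field like the one these tests keep. It is not the test's literal code.

// Hedged sketch: start one extra region server in the mini cluster and wait
// until the master's online-server list reaches the expected size.
// NUM_SLAVES_BASE and the TEST_UTIL parameter are assumptions, not from this log.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

public class ExtraRegionServerSketch {
  static final int NUM_SLAVES_BASE = 4; // assumed target number of region servers

  static void addRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
    cluster.startRegionServer(); // spawns an RS:<n> thread like RS:3 above
    // Block until the master has registered the new server (cf. the
    // "RegionServer ephemeral node created" record above).
    TEST_UTIL.waitFor(60000, () ->
        cluster.getMaster().getServerManager().getOnlineServersList().size()
            >= NUM_SLAVES_BASE);
  }
}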
2023-07-23 21:10:45,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:45,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:45,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:45,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:45,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:45,861 DEBUG [hconnection-0x38dc196c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:45,865 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45644, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:45,871 DEBUG [hconnection-0x38dc196c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:45,873 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49830, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:45,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:45,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:45,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:45,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:45,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46040 deadline: 1690147845885, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:45,887 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:45,889 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:45,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:45,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:45,890 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36963, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:45,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:45,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:45,897 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-23 21:10:45,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:45,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:45,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup deadServerGroup 2023-07-23 21:10:45,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:45,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:45,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-23 21:10:45,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:45,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:45,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:45,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:45,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36963] to rsgroup deadServerGroup 2023-07-23 21:10:45,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:45,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:45,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-23 21:10:45,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:45,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(238): Moving server region 674d6b4e3c5d6a4f0860e9c874b3e183, which do not belong to RSGroup deadServerGroup 2023-07-23 21:10:45,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:45,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:45,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:45,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:45,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:45,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE 2023-07-23 21:10:45,934 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE 
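[Editor's note] Two rsgroup interactions are interleaved above: the base class first tries to move the master's address (jenkins-hbase4.apache.org:35573) into a group and is rejected with a ConstraintException, because only online region servers are eligible; the test itself (testClearNotProcessedDeadServer) then adds "deadServerGroup" and moves region server 36963 into it, which forces a REOPEN/MOVE of the hbase:rsgroup region that server still hosts (pid=12). The sketch below shows the client side of those calls, assuming the pre-3.0 RSGroupAdminClient helper that appears in the stack traces above; the connection setup is an assumption, the group and host names mirror the log.

// Hedged sketch of the rsgroup admin calls behind the records above.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);

      groupAdmin.addRSGroup("deadServerGroup");

      // Moving a region server's address triggers a REOPEN/MOVE of any region
      // it still hosts, as with pid=12 above.
      groupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36963)),
          "deadServerGroup");

      // Passing the master's host:port instead fails with
      // ConstraintException("... is either offline or it does not exist."),
      // since the master is not in the online region-server set.
    }
  }
}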
2023-07-23 21:10:45,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 21:10:45,936 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:45,937 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146645936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146645936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146645936"}]},"ts":"1690146645936"} 2023-07-23 21:10:45,940 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36963,1690146641995}] 2023-07-23 21:10:45,951 INFO [RS:3;jenkins-hbase4:45637] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45637%2C1690146645550, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:10:45,989 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:45,993 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:45,993 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:45,997 INFO [RS:3;jenkins-hbase4:45637] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550/jenkins-hbase4.apache.org%2C45637%2C1690146645550.1690146645953 2023-07-23 21:10:45,998 DEBUG [RS:3;jenkins-hbase4:45637] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:10:46,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674d6b4e3c5d6a4f0860e9c874b3e183, disabling compactions & flushes 2023-07-23 21:10:46,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:46,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:46,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. after waiting 0 ms 2023-07-23 21:10:46,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:46,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 674d6b4e3c5d6a4f0860e9c874b3e183 1/1 column families, dataSize=1.27 KB heapSize=2.24 KB 2023-07-23 21:10:46,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.27 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/c559075bcb8741e4859507bb7fb7cfc8 2023-07-23 21:10:46,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/c559075bcb8741e4859507bb7fb7cfc8 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8 2023-07-23 21:10:46,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8, entries=3, sequenceid=9, filesize=5.1 K 2023-07-23 21:10:46,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.27 KB/1298, heapSize ~2.23 KB/2280, currentSize=0 B/0 for 674d6b4e3c5d6a4f0860e9c874b3e183 in 162ms, sequenceid=9, compaction requested=false 2023-07-23 21:10:46,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 21:10:46,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-23 21:10:46,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:46,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
2023-07-23 21:10:46,291 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:10:46,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 674d6b4e3c5d6a4f0860e9c874b3e183 move to jenkins-hbase4.apache.org,45637,1690146645550 record at close sequenceid=9 2023-07-23 21:10:46,294 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,295 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=CLOSED 2023-07-23 21:10:46,295 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146646295"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146646295"}]},"ts":"1690146646295"} 2023-07-23 21:10:46,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 21:10:46,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36963,1690146641995 in 357 msec 2023-07-23 21:10:46,301 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45637,1690146645550; forceNewPlan=false, retain=false 2023-07-23 21:10:46,451 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
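[Editor's note] At this point the close half of the REOPEN/MOVE is done (the m family was flushed, the region recorded its move target and hbase:meta shows CLOSED) and the balancer has picked the new host; the OPEN on 45637 follows in the next records. Once such a move completes, a test usually waits for regions-in-transition to clear and re-reads the location with a forced lookup, since the cached location would otherwise produce the RegionMovedException that appears further down. A hedged sketch, assuming an HBaseTestingUtility instance named TEST_UTIL:

// Hedged sketch: wait out the transition and fetch the fresh location of the
// single hbase:rsgroup region.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveVerificationSketch {
  static HRegionLocation locationAfterMove(HBaseTestingUtility TEST_UTIL) throws Exception {
    TEST_UTIL.waitUntilNoRegionsInTransition(60000);
    try (RegionLocator locator =
        TEST_UTIL.getConnection().getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      // reload=true bypasses the client's cached location
      return locator.getRegionLocation(Bytes.toBytes(""), true);
    }
  }
}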
2023-07-23 21:10:46,452 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:46,452 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146646452"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146646452"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146646452"}]},"ts":"1690146646452"} 2023-07-23 21:10:46,455 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:46,608 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:46,609 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:46,613 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49788, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:46,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:46,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674d6b4e3c5d6a4f0860e9c874b3e183, NAME => 'hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:46,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:46,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. service=MultiRowMutationService 2023-07-23 21:10:46,620 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 21:10:46,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:46,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,628 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,630 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:10:46,630 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:10:46,631 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674d6b4e3c5d6a4f0860e9c874b3e183 columnFamilyName m 2023-07-23 21:10:46,642 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8 2023-07-23 21:10:46,643 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(310): Store=674d6b4e3c5d6a4f0860e9c874b3e183/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:46,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:46,655 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674d6b4e3c5d6a4f0860e9c874b3e183; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1649d1d1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:46,655 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:10:46,657 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., pid=14, masterSystemTime=1690146646608 2023-07-23 21:10:46,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:46,663 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:46,664 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:46,664 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146646664"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146646664"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146646664"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146646664"}]},"ts":"1690146646664"} 2023-07-23 21:10:46,671 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-23 21:10:46,671 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,45637,1690146645550 in 212 msec 2023-07-23 21:10:46,673 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE in 740 msec 2023-07-23 21:10:46,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-23 21:10:46,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36963,1690146641995] are moved back to default 2023-07-23 21:10:46,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-23 21:10:46,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.MoveServers 2023-07-23 21:10:46,937 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36963] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:49830 deadline: 1690146706937, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45637 startCode=1690146645550. As of locationSeqNum=9. 2023-07-23 21:10:47,043 DEBUG [hconnection-0x38dc196c-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:47,045 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49804, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:47,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:47,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:47,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-23 21:10:47,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:47,072 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:47,076 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49836, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:47,076 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36963] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36963,1690146641995' ***** 2023-07-23 21:10:47,076 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36963] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x17f15d9f 2023-07-23 21:10:47,076 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:47,082 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:47,083 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:47,084 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:47,089 INFO [RS:1;jenkins-hbase4:36963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7c3ddaff{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:47,093 INFO [RS:1;jenkins-hbase4:36963] server.AbstractConnector(383): Stopped ServerConnector@4e28ad7f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:47,093 INFO [RS:1;jenkins-hbase4:36963] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:47,094 
INFO [RS:1;jenkins-hbase4:36963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5dda6698{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:47,094 INFO [RS:1;jenkins-hbase4:36963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@143aeb70{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:47,097 INFO [RS:1;jenkins-hbase4:36963] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:47,097 INFO [RS:1;jenkins-hbase4:36963] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:47,097 INFO [RS:1;jenkins-hbase4:36963] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:47,097 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:47,097 DEBUG [RS:1;jenkins-hbase4:36963] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b7d15cf to 127.0.0.1:59847 2023-07-23 21:10:47,097 DEBUG [RS:1;jenkins-hbase4:36963] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:47,097 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36963,1690146641995; all regions closed. 2023-07-23 21:10:47,111 DEBUG [RS:1;jenkins-hbase4:36963] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:10:47,111 INFO [RS:1;jenkins-hbase4:36963] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36963%2C1690146641995:(num 1690146644042) 2023-07-23 21:10:47,111 DEBUG [RS:1;jenkins-hbase4:36963] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:47,112 INFO [RS:1;jenkins-hbase4:36963] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:47,112 INFO [RS:1;jenkins-hbase4:36963] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:47,112 INFO [RS:1;jenkins-hbase4:36963] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:47,112 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:47,112 INFO [RS:1;jenkins-hbase4:36963] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:47,113 INFO [RS:1;jenkins-hbase4:36963] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
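[Editor's note] The "***** STOPPING region server ... ***** / STOPPED: Called by admin client" records above, followed by the orderly shutdown (info server stopped, leases closed, WAL rolled into oldWALs, chores cancelled), correspond to the test stopping region server 36963 through the admin RPC. A hedged sketch of that call, using the standard Admin API; the host:port mirrors the log, the connection handling is an assumption:

// Hedged sketch: ask one region server to shut down cleanly.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class StopRegionServerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The RS closes its regions, moves its WAL to oldWALs and drops its
      // /hbase/rs ephemeral znode, as in the records above.
      admin.stopRegionServer("jenkins-hbase4.apache.org:36963");
    }
  }
}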
2023-07-23 21:10:47,114 INFO [RS:1;jenkins-hbase4:36963] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36963 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,126 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 2023-07-23 21:10:47,127 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,127 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36963,1690146641995] 2023-07-23 21:10:47,128 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36963,1690146641995; numProcessing=1 2023-07-23 21:10:47,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,131 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,132 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36963,1690146641995 already deleted, retry=false 2023-07-23 21:10:47,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,132 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,36963,1690146641995 on jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:47,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,133 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,134 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 znode expired, triggering replicatorRemoved event 2023-07-23 21:10:47,134 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 znode expired, triggering replicatorRemoved event 2023-07-23 21:10:47,134 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36963,1690146641995 znode expired, triggering replicatorRemoved event 2023-07-23 21:10:47,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,137 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,142 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,36963,1690146641995, splitWal=true, meta=false 2023-07-23 21:10:47,142 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=15 for jenkins-hbase4.apache.org,36963,1690146641995 (carryingMeta=false) jenkins-hbase4.apache.org,36963,1690146641995/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6728c4a8[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-23 21:10:47,143 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:47,145 WARN [RS-EventLoopGroup-5-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:36963 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:36963 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:47,146 DEBUG [RS-EventLoopGroup-5-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:36963 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:36963 2023-07-23 21:10:47,148 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=15, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36963,1690146641995, splitWal=true, meta=false 2023-07-23 21:10:47,150 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,36963,1690146641995 had 0 regions 2023-07-23 21:10:47,152 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=15, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36963,1690146641995, splitWal=true, meta=false, isMeta: false 2023-07-23 21:10:47,155 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995-splitting 2023-07-23 21:10:47,156 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995-splitting dir is empty, no logs to split. 2023-07-23 21:10:47,156 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,36963,1690146641995 WAL count=0, meta=false 2023-07-23 21:10:47,161 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995-splitting dir is empty, no logs to split. 2023-07-23 21:10:47,161 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,36963,1690146641995 WAL count=0, meta=false 2023-07-23 21:10:47,161 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,36963,1690146641995 WAL splitting is done? 
wals=0, meta=false 2023-07-23 21:10:47,166 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,36963,1690146641995 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36963,1690146641995-splitting does not exist. 2023-07-23 21:10:47,169 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,36963,1690146641995 after splitting done 2023-07-23 21:10:47,169 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,36963,1690146641995 from processing; numProcessing=0 2023-07-23 21:10:47,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36963,1690146641995, splitWal=true, meta=false in 34 msec 2023-07-23 21:10:47,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-23 21:10:47,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:47,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:47,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:47,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:47,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
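The ServerCrashProcedure scheduled above for jenkins-hbase4.apache.org,36963,1690146641995 finishes as pid=15, state=SUCCESS in 34 msec. For readers cross-checking the test code, here is a minimal sketch, not taken from TestRSGroupsBasics itself, of how a test driving this mini cluster through an HBaseTestingUtility (called util below; the helper class and method names are illustrative) can block until every ServerCrashProcedure known to the master has finished before asserting on rsgroup state:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.Waiter;
    import org.apache.hadoop.hbase.master.HMaster;
    import org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure;

    final class CrashProcedureWaitSketch {
      // Poll the master's procedure executor until no unfinished ServerCrashProcedure remains.
      static void awaitServerCrashProcedures(HBaseTestingUtility util) throws Exception {
        final HMaster master = util.getMiniHBaseCluster().getMaster();
        util.waitFor(60_000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() {
            return master.getMasterProcedureExecutor().getProcedures().stream()
                .noneMatch(p -> p instanceof ServerCrashProcedure && !p.isFinished());
          }
        });
      }
    }

The 60-second bound mirrors the "Waiting up to [60,000] milli-secs" Waiter timeout that appears later in this log; waitFor throws if the predicate never turns true within it.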
2023-07-23 21:10:47,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:47,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:47,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:47,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:47,252 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:47,255 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:47,258 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,259 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:47,259 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-23 21:10:47,260 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:47,263 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:10:47,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-23 21:10:47,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:47,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:47,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:47,277 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:47,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:47,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36963] to rsgroup default 2023-07-23 21:10:47,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase4.apache.org:36963 during move-to-default rsgroup because not online 2023-07-23 21:10:47,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-23 21:10:47,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:47,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-23 21:10:47,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-23 21:10:47,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-23 21:10:47,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:47,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup deadServerGroup 2023-07-23 21:10:47,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:47,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:47,300 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:47,300 INFO [RS:1;jenkins-hbase4:36963] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36963,1690146641995; zookeeper connection closed. 
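The GetRSGroupInfo, ListRSGroupInfos, MoveTables, MoveServers and RemoveRSGroup requests logged above are the client half of the per-test cleanup of the deadServerGroup rsgroup. A sketch of the same sequence using the RSGroupAdminClient that appears in the stack traces further down; the connection, group and server parameters and the wrapper class are assumptions, not code from the test:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RsGroupCleanupSketch {
      // Move tables and servers back to the default group, then drop the temporary group,
      // mirroring the MoveTables, MoveServers and RemoveRSGroup RPCs in the log above.
      static void dropGroup(Connection conn, String group, Address server) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
        // A server that is no longer online is silently dropped during the move, as
        // RSGroupInfoManagerImpl(258) logs for jenkins-hbase4.apache.org:36963.
        rsGroupAdmin.moveServers(Collections.singleton(server), "default");
        rsGroupAdmin.removeRSGroup(group);
      }
    }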
2023-07-23 21:10:47,300 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36963-0x1019405901c0002, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:47,300 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2252ae99] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2252ae99 2023-07-23 21:10:47,302 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 21:10:47,320 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:47,320 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:47,321 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:47,321 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:47,321 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:47,321 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:47,321 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:47,325 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42335 2023-07-23 21:10:47,325 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:47,327 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:47,327 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:47,329 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:47,330 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42335 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:10:47,334 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:423350x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:47,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42335-0x1019405901c000d connected 2023-07-23 21:10:47,336 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): 
regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:47,337 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 21:10:47,338 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:47,338 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42335 2023-07-23 21:10:47,338 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42335 2023-07-23 21:10:47,342 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42335 2023-07-23 21:10:47,343 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42335 2023-07-23 21:10:47,343 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42335 2023-07-23 21:10:47,346 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:47,346 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:47,346 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:47,346 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:47,346 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:47,346 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:47,347 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 21:10:47,347 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 37113 2023-07-23 21:10:47,347 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:47,350 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:47,351 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d44147d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:47,351 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:47,352 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@691a3c28{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:47,469 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:47,470 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:47,470 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:47,470 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:47,472 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:47,473 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1835fc4c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-37113-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2947292071971521880/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:47,474 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@1d9f35c9{HTTP/1.1, (http/1.1)}{0.0.0.0:37113} 2023-07-23 21:10:47,474 INFO [Listener at localhost/38995] server.Server(415): Started @13274ms 2023-07-23 21:10:47,478 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:10:47,480 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:47,484 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:47,484 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:47,494 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:47,496 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ReadOnlyZKClient(139): Connect 0x1088b565 to 
127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:47,501 DEBUG [RS:4;jenkins-hbase4:42335] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e4f4904, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:47,501 DEBUG [RS:4;jenkins-hbase4:42335] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a70de15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:47,510 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:42335 2023-07-23 21:10:47,510 INFO [RS:4;jenkins-hbase4:42335] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:47,511 INFO [RS:4;jenkins-hbase4:42335] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:47,511 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:47,511 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35573,1690146639994 with isa=jenkins-hbase4.apache.org/172.31.14.131:42335, startcode=1690146647320 2023-07-23 21:10:47,511 DEBUG [RS:4;jenkins-hbase4:42335] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:47,514 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35369, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:47,514 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,514 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
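The ReadOnlyZKClient and AbstractRpcClient lines above show the restored region server's client-side settings against the ZooKeeper ensemble 127.0.0.1:59847. The region server wires this up internally; purely as an illustration of reaching the same mini cluster from test or client code, a sketch using ConnectionFactory (quorum and port copied from this log, the table listing is only a smoke check):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    final class MiniClusterClientSketch {
      // Open a client connection against the mini cluster's ZooKeeper ensemble.
      static void connect() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "59847");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          admin.listTableNames(); // hbase:meta, hbase:namespace, hbase:rsgroup in this run
        }
      }
    }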
2023-07-23 21:10:47,515 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:10:47,515 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:10:47,515 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41339 2023-07-23 21:10:47,517 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,517 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,517 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,517 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:47,518 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ZKUtil(162): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,518 WARN [RS:4;jenkins-hbase4:42335] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
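The stretch from "Restoring servers: 1" above through the reportForDuty and /hbase/rs znode registration just logged corresponds to starting one extra region server (RS:4, port 42335) inside the already-running mini cluster. A minimal sketch, assuming util is the test's HBaseTestingUtility:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    final class RestoreServerSketch {
      // Start one more region server in the mini cluster and block until it reports online.
      static void restoreOneRegionServer(HBaseTestingUtility util) throws Exception {
        JVMClusterUtil.RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
        rst.waitForServerOnline();
      }
    }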
2023-07-23 21:10:47,518 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42335,1690146647320] 2023-07-23 21:10:47,518 INFO [RS:4;jenkins-hbase4:42335] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:47,518 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,518 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,519 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,519 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,519 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,521 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,522 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,522 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,523 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,523 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,524 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,526 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:47,527 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,527 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,527 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,529 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35573,1690146639994] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 21:10:47,529 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ZKUtil(162): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:47,529 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ZKUtil(162): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,530 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ZKUtil(162): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:47,530 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ZKUtil(162): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:47,532 DEBUG [RS:4;jenkins-hbase4:42335] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:47,532 INFO [RS:4;jenkins-hbase4:42335] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:47,534 INFO [RS:4;jenkins-hbase4:42335] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:47,542 INFO [RS:4;jenkins-hbase4:42335] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:47,543 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:47,543 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:47,545 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,546 DEBUG [RS:4;jenkins-hbase4:42335] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:47,548 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:47,548 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:47,548 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:47,562 INFO [RS:4;jenkins-hbase4:42335] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:47,562 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42335,1690146647320-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:47,574 INFO [RS:4;jenkins-hbase4:42335] regionserver.Replication(203): jenkins-hbase4.apache.org,42335,1690146647320 started 2023-07-23 21:10:47,574 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42335,1690146647320, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42335, sessionid=0x1019405901c000d 2023-07-23 21:10:47,574 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:47,574 DEBUG [RS:4;jenkins-hbase4:42335] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,574 DEBUG [RS:4;jenkins-hbase4:42335] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42335,1690146647320' 2023-07-23 21:10:47,574 DEBUG [RS:4;jenkins-hbase4:42335] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:47,574 DEBUG [RS:4;jenkins-hbase4:42335] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:47,575 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:47,575 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:47,575 DEBUG [RS:4;jenkins-hbase4:42335] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:47,575 DEBUG [RS:4;jenkins-hbase4:42335] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42335,1690146647320' 2023-07-23 21:10:47,575 DEBUG [RS:4;jenkins-hbase4:42335] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:47,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:47,575 DEBUG [RS:4;jenkins-hbase4:42335] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:47,576 DEBUG [RS:4;jenkins-hbase4:42335] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:47,576 INFO [RS:4;jenkins-hbase4:42335] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:47,576 INFO [RS:4;jenkins-hbase4:42335] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
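Right after the restored region server is up, the cleanup re-adds the "master" rsgroup and tries to move the master's own address into it; the entries that follow show that attempt failing with a ConstraintException, which TestRSGroupsBase.tearDownAfterMethod logs as "Got this on setup, FYI" and tolerates. A sketch of that tolerant call; the rsGroupAdmin and masterAddress parameters, the wrapper class and the slf4j logger are assumed for illustration:

    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    final class MoveMasterSketch {
      private static final Logger LOG = LoggerFactory.getLogger(MoveMasterSketch.class);

      // The master's host:port is not a region server, so the move is expected to fail with
      // "is either offline or it does not exist"; log it and carry on, as the teardown does.
      static void moveMasterTolerantly(RSGroupAdminClient rsGroupAdmin, Address masterAddress)
          throws Exception {
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException expected) {
          LOG.warn("Got this on setup, FYI", expected);
        }
      }
    }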
2023-07-23 21:10:47,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:47,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:47,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:47,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:47,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:47,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:47,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:47,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46040 deadline: 1690147847591, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:47,593 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:47,594 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:47,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:47,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:47,596 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:47,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:47,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:47,622 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=483 (was 425) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1984174199_17 at /127.0.0.1:33866 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1088b565-SendThread(127.0.0.1:59847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase4:45637-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp6828122-726 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-765447770_17 at /127.0.0.1:52636 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x0633ec82-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x0633ec82 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1088b565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:59847@0x0633ec82-SendThread(127.0.0.1:59847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1485033198) connection to localhost/127.0.0.1:32841 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1546823403-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:32841 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914-prefix:jenkins-hbase4.apache.org,45637,1690146645550 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1546823403-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp6828122-728 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1984174199_17 at /127.0.0.1:52680 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42335Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1984174199_17 at /127.0.0.1:44478 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1546823403-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1088b565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5b773554-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1485033198) connection to localhost/127.0.0.1:32841 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_301319210_17 at /127.0.0.1:44502 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:45637Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp6828122-729 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1546823403-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1984174199_17 at /127.0.0.1:33838 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1546823403-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp6828122-731 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-metaLookup-shared--pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase4:42335 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1546823403-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp6828122-727-acceptor-0@407c305c-ServerConnector@1d9f35c9{HTTP/1.1, (http/1.1)}{0.0.0.0:37113} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-765447770_17 at /127.0.0.1:52688 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1546823403-640
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp6828122-732
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Session-HouseKeeper-4e9b8f1c-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:3;jenkins-hbase4:45637
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159)
    java.security.AccessController.doPrivileged(Native Method)
    javax.security.auth.Subject.doAs(Subject.java:360)
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
    org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:4;jenkins-hbase4:42335-longCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x38dc196c-shared-pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42335
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: qtp6828122-733
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp6828122-730
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1546823403-638-acceptor-0@1afc0c73-ServerConnector@4c813439{HTTP/1.1, (http/1.1)}{0.0.0.0:43765}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
 - Thread LEAK? -, OpenFileDescriptor=731 (was 683) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=504 (was 504), ProcessCount=175 (was 175), AvailableMemoryMB=6027 (was 6087)
2023-07-23 21:10:47,638 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=483, OpenFileDescriptor=731, MaxFileDescriptor=60000, SystemLoadAverage=504, ProcessCount=175, AvailableMemoryMB=6026
2023-07-23 21:10:47,638 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign
2023-07-23 21:10:47,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 21:10:47,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 21:10:47,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 21:10:47,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
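The setup entries around this point show the per-test cleanup in TestRSGroupsBase exercising the RSGroupAdminService endpoint (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup). As an illustrative sketch only, not the test's actual code, the same sequence can be driven from a client with RSGroupAdminClient; the connection setup and the literal group name "master" are assumptions read off the surrounding log.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes the cluster config is on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // "list rsgroup" -> RSGroupAdminService.ListRSGroupInfos
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName() + " servers=" + info.getServers() + " tables=" + info.getTables());
      }
      // "move tables [] to rsgroup default" / "move servers [] to rsgroup default"
      rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(Collections.<Address>emptySet(), RSGroupInfo.DEFAULT_GROUP);
      // "remove rsgroup master" followed by "add rsgroup master"
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");
    }
  }
}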
2023-07-23 21:10:47,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:47,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:47,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:47,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:47,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:47,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:47,661 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:47,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:47,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:47,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:47,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:47,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:47,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:47,679 INFO [RS:4;jenkins-hbase4:42335] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42335%2C1690146647320, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:10:47,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:47,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:47,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46040 deadline: 1690147847680, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:47,681 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:47,683 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:47,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:47,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:47,685 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:47,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:47,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:47,686 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-23 21:10:47,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-23 21:10:47,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-23 21:10:47,705 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:10:47,706 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:10:47,706 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:10:47,710 INFO [RS:4;jenkins-hbase4:42335] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320/jenkins-hbase4.apache.org%2C42335%2C1690146647320.1690146647681 2023-07-23 21:10:47,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 21:10:47,718 DEBUG [RS:4;jenkins-hbase4:42335] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:10:47,726 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:10:47,728 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 33 msec 2023-07-23 21:10:47,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 21:10:47,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:47,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=17, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:47,836 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:47,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 17 2023-07-23 21:10:47,841 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,841 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:47,842 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:47,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-23 21:10:47,845 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:47,847 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:47,848 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 empty. 
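The HMaster entry above logs the create request for 'Group_testCreateAndAssign' with a single column family 'f' (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', and the rest at defaults). A minimal client-side sketch of an equivalent request follows; it assumes an already-open Connection named conn and only sets the handful of attributes called out in the log line.

// assumed imports: java.io.IOException, org.apache.hadoop.hbase.TableName,
// org.apache.hadoop.hbase.client.*, org.apache.hadoop.hbase.regionserver.BloomType,
// org.apache.hadoop.hbase.util.Bytes
static void createGroupTestTable(Connection conn) throws IOException {
  TableName tableName = TableName.valueOf("Group_testCreateAndAssign");
  TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
      .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
          .setMaxVersions(1)                 // VERSIONS => '1'
          .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
          .setBlocksize(65536)               // BLOCKSIZE => '65536'
          .build())
      .build();
  try (Admin admin = conn.getAdmin()) {
    admin.createTable(desc); // on the master this runs a CreateTableProcedure (pid=17 in this run)
  }
}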
2023-07-23 21:10:47,848 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:47,848 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-23 21:10:47,874 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:47,876 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => b4d412422b474db6bfe1047e395f19f1, NAME => 'Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:47,892 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:47,892 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing b4d412422b474db6bfe1047e395f19f1, disabling compactions & flushes 2023-07-23 21:10:47,892 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:47,893 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:47,893 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. after waiting 0 ms 2023-07-23 21:10:47,893 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:47,893 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 
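Once the table descriptor has been written (the .tableinfo.0000000001 file above) and the create completes, the stored schema can be read back through the Admin API. A small hypothetical check, again assuming conn and the imports noted earlier plus ColumnFamilyDescriptor:

static void printStoredSchema(Connection conn) throws IOException {
  try (Admin admin = conn.getAdmin()) {
    TableDescriptor stored = admin.getDescriptor(TableName.valueOf("Group_testCreateAndAssign"));
    for (ColumnFamilyDescriptor cf : stored.getColumnFamilies()) {
      System.out.println(cf.getNameAsString() + " maxVersions=" + cf.getMaxVersions()
          + " blocksize=" + cf.getBlocksize());
    }
  }
}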
2023-07-23 21:10:47,893 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for b4d412422b474db6bfe1047e395f19f1: 2023-07-23 21:10:47,896 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:47,898 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146647897"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146647897"}]},"ts":"1690146647897"} 2023-07-23 21:10:47,900 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:47,901 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:47,901 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146647901"}]},"ts":"1690146647901"} 2023-07-23 21:10:47,902 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-23 21:10:47,906 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:47,906 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:47,906 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:47,906 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:47,906 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:10:47,906 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:47,906 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, ASSIGN}] 2023-07-23 21:10:47,908 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, ASSIGN 2023-07-23 21:10:47,909 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42335,1690146647320; forceNewPlan=false, retain=false 2023-07-23 21:10:47,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-23 21:10:48,059 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
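Above, the balancer reports the placement and a TransitRegionStateProcedure (pid=18) begins assigning the table's single region. As a client-side illustration only, once the create call has returned the resulting placement can be confirmed with a RegionLocator (conn assumed as before; imports org.apache.hadoop.hbase.HRegionLocation and org.apache.hadoop.hbase.client.RegionLocator assumed):

static void printRegionPlacement(Connection conn) throws IOException {
  try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("Group_testCreateAndAssign"))) {
    for (HRegionLocation loc : locator.getAllRegionLocations()) {
      // e.g. b4d412422b474db6bfe1047e395f19f1 -> jenkins-hbase4.apache.org,42335,1690146647320
      System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
    }
  }
}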
2023-07-23 21:10:48,061 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=b4d412422b474db6bfe1047e395f19f1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:48,061 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146648061"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146648061"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146648061"}]},"ts":"1690146648061"} 2023-07-23 21:10:48,063 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE; OpenRegionProcedure b4d412422b474db6bfe1047e395f19f1, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:48,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-23 21:10:48,217 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:48,217 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:48,222 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33886, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:48,227 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:48,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4d412422b474db6bfe1047e395f19f1, NAME => 'Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:48,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:48,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,230 INFO [StoreOpener-b4d412422b474db6bfe1047e395f19f1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,232 DEBUG [StoreOpener-b4d412422b474db6bfe1047e395f19f1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/f 2023-07-23 21:10:48,232 DEBUG [StoreOpener-b4d412422b474db6bfe1047e395f19f1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/f 2023-07-23 21:10:48,233 INFO [StoreOpener-b4d412422b474db6bfe1047e395f19f1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4d412422b474db6bfe1047e395f19f1 columnFamilyName f 2023-07-23 21:10:48,234 INFO [StoreOpener-b4d412422b474db6bfe1047e395f19f1-1] regionserver.HStore(310): Store=b4d412422b474db6bfe1047e395f19f1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:48,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:48,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4d412422b474db6bfe1047e395f19f1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11403146880, jitterRate=0.062000811100006104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:48,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4d412422b474db6bfe1047e395f19f1: 2023-07-23 21:10:48,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1., pid=19, masterSystemTime=1690146648217 2023-07-23 21:10:48,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post 
open deploy task for Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:48,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:48,258 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=b4d412422b474db6bfe1047e395f19f1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:48,258 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146648258"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146648258"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146648258"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146648258"}]},"ts":"1690146648258"} 2023-07-23 21:10:48,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-23 21:10:48,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; OpenRegionProcedure b4d412422b474db6bfe1047e395f19f1, server=jenkins-hbase4.apache.org,42335,1690146647320 in 198 msec 2023-07-23 21:10:48,269 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 21:10:48,270 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, ASSIGN in 358 msec 2023-07-23 21:10:48,270 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:48,271 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146648271"}]},"ts":"1690146648271"} 2023-07-23 21:10:48,273 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-23 21:10:48,279 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:48,281 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 446 msec 2023-07-23 21:10:48,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-23 21:10:48,449 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 17 completed 2023-07-23 21:10:48,449 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:48,454 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:48,456 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): 
Connection from 172.31.14.131:33892, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:48,459 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:48,461 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:48,462 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:48,464 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49810, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:48,465 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:48,466 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55690, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:48,471 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-23 21:10:48,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndAssign 2023-07-23 21:10:48,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,488 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146648488"}]},"ts":"1690146648488"} 2023-07-23 21:10:48,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:10:48,490 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-23 21:10:48,492 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-23 21:10:48,495 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, UNASSIGN}] 2023-07-23 21:10:48,497 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, UNASSIGN 2023-07-23 21:10:48,498 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b4d412422b474db6bfe1047e395f19f1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:48,498 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146648498"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146648498"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146648498"}]},"ts":"1690146648498"} 2023-07-23 21:10:48,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure b4d412422b474db6bfe1047e395f19f1, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:48,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:10:48,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4d412422b474db6bfe1047e395f19f1, disabling compactions & flushes 2023-07-23 21:10:48,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:48,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:48,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. after waiting 0 ms 2023-07-23 21:10:48,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 2023-07-23 21:10:48,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:48,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1. 
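The DISABLE path above runs DisableTableProcedure (pid=20), which spawns an UNASSIGN TransitRegionStateProcedure and a CloseRegionProcedure on the hosting region server; the earlier "Started disable of Group_testCreateAndAssign" entry corresponds to the client calling Admin.disableTable. A sketch of that call, with conn assumed as before:

static void disableGroupTestTable(Connection conn) throws IOException {
  TableName tn = TableName.valueOf("Group_testCreateAndAssign");
  try (Admin admin = conn.getAdmin()) {
    admin.disableTable(tn); // returns once the DISABLE procedure (pid=20 here) completes
    System.out.println("disabled=" + admin.isTableDisabled(tn));
  }
}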
2023-07-23 21:10:48,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4d412422b474db6bfe1047e395f19f1: 2023-07-23 21:10:48,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,666 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b4d412422b474db6bfe1047e395f19f1, regionState=CLOSED 2023-07-23 21:10:48,666 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146648666"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146648666"}]},"ts":"1690146648666"} 2023-07-23 21:10:48,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-23 21:10:48,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure b4d412422b474db6bfe1047e395f19f1, server=jenkins-hbase4.apache.org,42335,1690146647320 in 168 msec 2023-07-23 21:10:48,672 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-23 21:10:48,672 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=b4d412422b474db6bfe1047e395f19f1, UNASSIGN in 175 msec 2023-07-23 21:10:48,674 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146648673"}]},"ts":"1690146648673"} 2023-07-23 21:10:48,675 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-23 21:10:48,677 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-23 21:10:48,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 200 msec 2023-07-23 21:10:48,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:10:48,794 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 20 completed 2023-07-23 21:10:48,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndAssign 2023-07-23 21:10:48,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,809 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-23 21:10:48,811 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:48,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:48,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:48,819 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 21:10:48,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/recovered.edits] 2023-07-23 21:10:48,832 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1/recovered.edits/4.seqid 2023-07-23 21:10:48,833 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndAssign/b4d412422b474db6bfe1047e395f19f1 2023-07-23 21:10:48,833 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-23 21:10:48,837 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,865 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-23 21:10:48,910 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 2023-07-23 21:10:48,913 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,913 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 
2023-07-23 21:10:48,913 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146648913"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:48,915 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:48,916 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b4d412422b474db6bfe1047e395f19f1, NAME => 'Group_testCreateAndAssign,,1690146647830.b4d412422b474db6bfe1047e395f19f1.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:48,916 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 2023-07-23 21:10:48,916 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146648916"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:48,918 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-23 21:10:48,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 21:10:48,921 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:10:48,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 121 msec 2023-07-23 21:10:49,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 21:10:49,123 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 23 completed 2023-07-23 21:10:49,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:49,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:49,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:49,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
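With the DELETE of default:Group_testCreateAndAssign reported complete above (procId: 23) and the rsgroup endpoint dropping the deleted table from the 'default' group, the matching client call is Admin.deleteTable on the now-disabled table. A sketch under the same assumed conn:

static void deleteGroupTestTable(Connection conn) throws IOException {
  TableName tn = TableName.valueOf("Group_testCreateAndAssign");
  try (Admin admin = conn.getAdmin()) {
    admin.deleteTable(tn); // drives the DeleteTableProcedure: archive region dirs, remove META rows, drop the descriptor
    System.out.println("exists=" + admin.tableExists(tn)); // expected: false
  }
}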
2023-07-23 21:10:49,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:49,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:49,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:49,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:49,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:49,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:49,141 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:49,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:49,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:49,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:49,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:49,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:49,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:49,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:49,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:49,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 163 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147849154, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:49,155 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:49,157 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:49,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:49,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:49,158 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:49,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:49,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:49,177 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=496 (was 483) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914-prefix:jenkins-hbase4.apache.org,42335,1690146647320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1206779787_17 at /127.0.0.1:33874 [Receiving block 
BP-1946696265-172.31.14.131-1690146636244:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1206779787_17 at /127.0.0.1:52698 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1206779787_17 at /127.0.0.1:44512 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1206779787_17 at /127.0.0.1:44502 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c44466f-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:32841 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=759 (was 731) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 504), ProcessCount=173 (was 175), AvailableMemoryMB=8147 (was 6026) - AvailableMemoryMB LEAK? - 2023-07-23 21:10:49,197 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=496, OpenFileDescriptor=759, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=8147 2023-07-23 21:10:49,197 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-23 21:10:49,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:49,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:49,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:49,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:49,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:49,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:49,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:49,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:49,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:49,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:49,218 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:49,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:49,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:49,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:49,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:49,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:49,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:49,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:49,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:49,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 191 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147849230, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:49,231 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:49,232 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:49,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:49,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:49,234 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:49,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:49,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:49,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:49,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:49,241 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:49,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request 
for creating table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 24 2023-07-23 21:10:49,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:49,243 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,244 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:49,244 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:49,252 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:49,262 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:49,262 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:49,262 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:49,262 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:49,262 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:49,263 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:49,263 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:49,262 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:49,263 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 empty. 2023-07-23 21:10:49,263 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 empty. 
2023-07-23 21:10:49,264 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee empty. 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 empty. 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 empty. 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b empty. 2023-07-23 21:10:49,264 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:49,266 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 empty. 2023-07-23 21:10:49,265 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:49,266 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:49,266 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 empty. 
2023-07-23 21:10:49,266 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:49,265 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:49,265 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 empty. 2023-07-23 21:10:49,268 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c empty. 2023-07-23 21:10:49,268 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:49,268 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:49,268 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:49,269 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:49,269 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-23 21:10:49,307 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:49,309 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 134135b7471ca4f427a304c701ae4217, NAME => 'Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,309 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 19231f9db179525f2bc140ae04139a99, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', 
BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,310 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6cf01962c34d31abe83bc5c26e1f54f4, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:49,355 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,355 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 6cf01962c34d31abe83bc5c26e1f54f4, disabling compactions & flushes 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 19231f9db179525f2bc140ae04139a99, disabling compactions & flushes 2023-07-23 21:10:49,357 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:49,357 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 
after waiting 0 ms 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. after waiting 0 ms 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:49,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:49,358 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:49,358 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:49,358 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 6cf01962c34d31abe83bc5c26e1f54f4: 2023-07-23 21:10:49,358 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 19231f9db179525f2bc140ae04139a99: 2023-07-23 21:10:49,358 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 84264fc15b9b146b3a3191af3f7589a0, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,359 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 0533d1e24e45fc02629db77ac654984b, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,359 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,360 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 134135b7471ca4f427a304c701ae4217, disabling compactions & flushes 2023-07-23 21:10:49,360 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] 
regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:49,360 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:49,360 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. after waiting 0 ms 2023-07-23 21:10:49,361 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:49,361 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:49,361 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 134135b7471ca4f427a304c701ae4217: 2023-07-23 21:10:49,361 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4a591b15fcf41c839cb213d14daf536, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,397 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 84264fc15b9b146b3a3191af3f7589a0, disabling compactions & flushes 2023-07-23 21:10:49,397 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:49,397 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:49,397 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. after waiting 0 ms 2023-07-23 21:10:49,397 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 
2023-07-23 21:10:49,397 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:49,397 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 84264fc15b9b146b3a3191af3f7589a0: 2023-07-23 21:10:49,398 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 4362896728e8f23b0010c41e1f288c84, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,419 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,419 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing e4a591b15fcf41c839cb213d14daf536, disabling compactions & flushes 2023-07-23 21:10:49,420 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:49,420 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:49,420 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. after waiting 0 ms 2023-07-23 21:10:49,420 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:49,420 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 
2023-07-23 21:10:49,420 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for e4a591b15fcf41c839cb213d14daf536: 2023-07-23 21:10:49,422 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => a930743917a64f683bb3541e65b4bbee, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,456 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,460 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 4362896728e8f23b0010c41e1f288c84, disabling compactions & flushes 2023-07-23 21:10:49,461 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:49,461 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:49,461 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. after waiting 0 ms 2023-07-23 21:10:49,461 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:49,461 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 
2023-07-23 21:10:49,461 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 4362896728e8f23b0010c41e1f288c84: 2023-07-23 21:10:49,462 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5c4b2526340ace3ba5d6e7aeab20f20c, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,475 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,477 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing a930743917a64f683bb3541e65b4bbee, disabling compactions & flushes 2023-07-23 21:10:49,477 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:49,477 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:49,477 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. after waiting 0 ms 2023-07-23 21:10:49,477 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:49,477 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 
2023-07-23 21:10:49,477 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for a930743917a64f683bb3541e65b4bbee: 2023-07-23 21:10:49,478 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 545ce0982cad2c351e7e32ca135e6c68, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:49,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 5c4b2526340ace3ba5d6e7aeab20f20c, disabling compactions & flushes 2023-07-23 21:10:49,505 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:49,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:49,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. after waiting 0 ms 2023-07-23 21:10:49,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:49,505 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 
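Note: the CREATE_TABLE_WRITE_FS_LAYOUT entries above lay out ten region directories whose boundaries come from nine explicit split keys (the STARTKEY/ENDKEY values in the "creating {ENCODED => ...}" entries), which is why the procedure later reports "Added 10 regions to meta." A minimal client-side sketch of how such a pre-split table is typically requested through the HBase 2.x Admin API follows; only the table name, the column family 'f', and the split bytes are taken from the log, while the class name and connection setup are illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateMultiRegionSketch {
      public static void main(String[] args) throws Exception {
        // Nine split keys => ten regions; byte values copied from the
        // STARTKEY/ENDKEY fields logged by HRegion(7675) above.
        byte[][] splitKeys = new byte[][] {
            {0x00, 0x02, 0x04, 0x06, 0x08},
            {0x00, 0x22, 0x24, 0x26, 0x28},
            {0x00, 0x42, 0x44, 0x46, 0x48},
            {0x00, 0x62, 0x64, 0x66, 0x68},
            {0x00, (byte) 0x82, (byte) 0x84, (byte) 0x86, (byte) 0x88},
            {0x00, (byte) 0xA2, (byte) 0xA4, (byte) 0xA6, (byte) 0xA8},
            {0x00, (byte) 0xC2, (byte) 0xC4, (byte) 0xC6, (byte) 0xC8},
            {0x00, (byte) 0xE2, (byte) 0xE4, (byte) 0xE6, (byte) 0xE8},
            {0x01, 0x03, 0x05, 0x07, 0x09}
        };
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testCreateMultiRegion"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          // Submits a CreateTableProcedure to the master, which writes the FS
          // layout, adds the regions to hbase:meta, and then assigns them.
          admin.createTable(desc, splitKeys);
        }
      }
    }

The recurring "Checking to see if procedure is done pid=24" entries are the client polling the master for completion of procedure 24; a blocking Admin.createTable call waits on that poll loop, whereas createTableAsync would hand back a Future instead.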
2023-07-23 21:10:49,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 5c4b2526340ace3ba5d6e7aeab20f20c: 2023-07-23 21:10:49,513 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,515 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 545ce0982cad2c351e7e32ca135e6c68, disabling compactions & flushes 2023-07-23 21:10:49,515 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:49,515 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:49,516 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. after waiting 0 ms 2023-07-23 21:10:49,516 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:49,516 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:49,516 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 545ce0982cad2c351e7e32ca135e6c68: 2023-07-23 21:10:49,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:49,798 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,798 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 0533d1e24e45fc02629db77ac654984b, disabling compactions & flushes 2023-07-23 21:10:49,798 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:49,798 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:49,798 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 
after waiting 0 ms 2023-07-23 21:10:49,798 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:49,798 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:49,798 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 0533d1e24e45fc02629db77ac654984b: 2023-07-23 21:10:49,804 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:49,806 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,806 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690146649237.19231f9db179525f2bc140ae04139a99.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,806 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 
2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,807 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649806"}]},"ts":"1690146649806"} 2023-07-23 21:10:49,812 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-23 21:10:49,814 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:49,814 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146649814"}]},"ts":"1690146649814"} 2023-07-23 21:10:49,816 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-23 21:10:49,820 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:49,821 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:49,821 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:49,821 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:49,821 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:10:49,821 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:49,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, ASSIGN}, {pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, ASSIGN}, {pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=19231f9db179525f2bc140ae04139a99, ASSIGN}, {pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, ASSIGN}, {pid=29, ppid=24, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, ASSIGN}, {pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, ASSIGN}, {pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, ASSIGN}, {pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, ASSIGN}, {pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, ASSIGN}, {pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, ASSIGN}] 2023-07-23 21:10:49,825 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, ASSIGN 2023-07-23 21:10:49,826 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, ASSIGN 2023-07-23 21:10:49,826 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, ASSIGN 2023-07-23 21:10:49,826 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, ASSIGN 2023-07-23 21:10:49,827 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42335,1690146647320; forceNewPlan=false, retain=false 2023-07-23 21:10:49,827 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, ASSIGN 2023-07-23 21:10:49,827 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45637,1690146645550; forceNewPlan=false, retain=false 2023-07-23 21:10:49,828 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, ASSIGN 2023-07-23 21:10:49,828 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46485,1690146642211; forceNewPlan=false, retain=false 2023-07-23 21:10:49,828 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1690146641774; forceNewPlan=false, retain=false 2023-07-23 21:10:49,829 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1690146641774; forceNewPlan=false, retain=false 2023-07-23 21:10:49,829 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45637,1690146645550; forceNewPlan=false, retain=false 2023-07-23 21:10:49,829 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, ASSIGN 2023-07-23 21:10:49,830 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=19231f9db179525f2bc140ae04139a99, ASSIGN 2023-07-23 21:10:49,830 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, ASSIGN 2023-07-23 21:10:49,831 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, ASSIGN 2023-07-23 21:10:49,831 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42335,1690146647320; forceNewPlan=false, retain=false 2023-07-23 21:10:49,831 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, 
region=19231f9db179525f2bc140ae04139a99, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45637,1690146645550; forceNewPlan=false, retain=false 2023-07-23 21:10:49,831 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1690146641774; forceNewPlan=false, retain=false 2023-07-23 21:10:49,832 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46485,1690146642211; forceNewPlan=false, retain=false 2023-07-23 21:10:49,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:49,977 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-23 21:10:49,983 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=134135b7471ca4f427a304c701ae4217, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:49,983 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=19231f9db179525f2bc140ae04139a99, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:49,983 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=0533d1e24e45fc02629db77ac654984b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:49,983 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690146649237.19231f9db179525f2bc140ae04139a99.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649983"}]},"ts":"1690146649983"} 2023-07-23 21:10:49,983 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=a930743917a64f683bb3541e65b4bbee, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:49,983 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=4362896728e8f23b0010c41e1f288c84, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:49,983 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649983"}]},"ts":"1690146649983"} 2023-07-23 21:10:49,983 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649983"}]},"ts":"1690146649983"} 2023-07-23 21:10:49,983 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649983"}]},"ts":"1690146649983"} 2023-07-23 21:10:49,983 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146649983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649983"}]},"ts":"1690146649983"} 2023-07-23 21:10:49,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=27, state=RUNNABLE; OpenRegionProcedure 19231f9db179525f2bc140ae04139a99, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:49,986 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:10:49,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; OpenRegionProcedure a930743917a64f683bb3541e65b4bbee, server=jenkins-hbase4.apache.org,46485,1690146642211}] 2023-07-23 21:10:49,989 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=29, state=RUNNABLE; OpenRegionProcedure 0533d1e24e45fc02629db77ac654984b, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:49,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=31, state=RUNNABLE; OpenRegionProcedure 4362896728e8f23b0010c41e1f288c84, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:49,992 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=25, state=RUNNABLE; OpenRegionProcedure 134135b7471ca4f427a304c701ae4217, server=jenkins-hbase4.apache.org,46485,1690146642211}] 2023-07-23 21:10:49,994 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=545ce0982cad2c351e7e32ca135e6c68, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:49,994 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146649993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649993"}]},"ts":"1690146649993"} 2023-07-23 21:10:49,994 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=e4a591b15fcf41c839cb213d14daf536, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:49,994 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146649994"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649994"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649994"}]},"ts":"1690146649994"} 2023-07-23 21:10:49,997 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=34, state=RUNNABLE; OpenRegionProcedure 545ce0982cad2c351e7e32ca135e6c68, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:49,998 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure e4a591b15fcf41c839cb213d14daf536, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:50,000 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=6cf01962c34d31abe83bc5c26e1f54f4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,000 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650000"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650000"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650000"}]},"ts":"1690146650000"} 2023-07-23 21:10:50,000 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=5c4b2526340ace3ba5d6e7aeab20f20c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:50,001 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650000"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650000"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650000"}]},"ts":"1690146650000"} 2023-07-23 21:10:50,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=26, state=RUNNABLE; OpenRegionProcedure 6cf01962c34d31abe83bc5c26e1f54f4, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:50,003 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=84264fc15b9b146b3a3191af3f7589a0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:50,003 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650003"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650003"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650003"}]},"ts":"1690146650003"} 2023-07-23 21:10:50,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=33, state=RUNNABLE; OpenRegionProcedure 5c4b2526340ace3ba5d6e7aeab20f20c, server=jenkins-hbase4.apache.org,42335,1690146647320}] 
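Note: the RegionStateStore Put entries above record each region's transition to OPENING in hbase:meta, writing the info:regioninfo, info:sn (the target server name string) and info:state columns. A small sketch of reading that catalog state back from a client is given below, under the assumption of default client configuration; the column names and the "table,startkey,regionid." row-key layout are as shown in the log, while the class and variable names are illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DumpAssignmentState {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection();
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // hbase:meta row keys look like "<table>,<start key>,<region id>." as in
          // the Put entries above; ',' + 1 == '-', so the stop row bounds this table.
          Scan scan = new Scan()
              .withStartRow(Bytes.toBytes("Group_testCreateMultiRegion,"))
              .withStopRow(Bytes.toBytes("Group_testCreateMultiRegion-"))
              .addFamily(info);
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              String state = Bytes.toString(r.getValue(info, Bytes.toBytes("state")));
              String sn = Bytes.toString(r.getValue(info, Bytes.toBytes("sn")));
              System.out.println(Bytes.toStringBinary(r.getRow())
                  + " state=" + state + " sn=" + sn);
            }
          }
        }
      }
    }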
2023-07-23 21:10:50,006 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=28, state=RUNNABLE; OpenRegionProcedure 84264fc15b9b146b3a3191af3f7589a0, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:50,065 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 21:10:50,066 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 21:10:50,141 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:50,141 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:50,143 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:50,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 19231f9db179525f2bc140ae04139a99, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-23 21:10:50,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 
2023-07-23 21:10:50,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a930743917a64f683bb3541e65b4bbee, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-23 21:10:50,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,156 INFO [StoreOpener-a930743917a64f683bb3541e65b4bbee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,159 DEBUG [StoreOpener-a930743917a64f683bb3541e65b4bbee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/f 2023-07-23 21:10:50,159 DEBUG [StoreOpener-a930743917a64f683bb3541e65b4bbee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/f 2023-07-23 21:10:50,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 
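Note: from this point the log is on the region-server side: AssignRegionHandler opens each assigned region, instantiates it, and builds the cache and compaction configuration for column family 'f' before opening its store. Once the OpenRegionProcedures complete, the resulting placement can be read back through the client. A minimal sketch follows, assuming default client configuration; only the table name is taken from the log.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ShowRegionPlacement {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
        try (Connection conn = ConnectionFactory.createConnection();
             RegionLocator locator = conn.getRegionLocator(tn)) {
          // One HRegionLocation per region: its start/end key plus the hosting
          // server, i.e. the outcome of the TransitRegionStateProcedure and
          // OpenRegionProcedure chain recorded in the log.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(Bytes.toStringBinary(loc.getRegion().getStartKey())
                + " -> " + Bytes.toStringBinary(loc.getRegion().getEndKey())
                + " on " + loc.getServerName());
          }
        }
      }
    }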
2023-07-23 21:10:50,159 INFO [StoreOpener-a930743917a64f683bb3541e65b4bbee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a930743917a64f683bb3541e65b4bbee columnFamilyName f 2023-07-23 21:10:50,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6cf01962c34d31abe83bc5c26e1f54f4, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-23 21:10:50,161 INFO [StoreOpener-19231f9db179525f2bc140ae04139a99-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,165 INFO [StoreOpener-a930743917a64f683bb3541e65b4bbee-1] regionserver.HStore(310): Store=a930743917a64f683bb3541e65b4bbee/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,167 DEBUG [StoreOpener-19231f9db179525f2bc140ae04139a99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/f 2023-07-23 21:10:50,167 DEBUG [StoreOpener-19231f9db179525f2bc140ae04139a99-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/f 2023-07-23 21:10:50,167 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:50,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c4b2526340ace3ba5d6e7aeab20f20c, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-23 21:10:50,168 INFO [StoreOpener-19231f9db179525f2bc140ae04139a99-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 19231f9db179525f2bc140ae04139a99 columnFamilyName f 2023-07-23 21:10:50,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,168 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:10:50,169 INFO [StoreOpener-19231f9db179525f2bc140ae04139a99-1] regionserver.HStore(310): Store=19231f9db179525f2bc140ae04139a99/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,169 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-23 21:10:50,169 INFO [StoreOpener-6cf01962c34d31abe83bc5c26e1f54f4-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,169 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:50,170 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 21:10:50,170 INFO [StoreOpener-5c4b2526340ace3ba5d6e7aeab20f20c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,170 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:10:50,170 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-23 21:10:50,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,173 DEBUG [StoreOpener-5c4b2526340ace3ba5d6e7aeab20f20c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/f 2023-07-23 21:10:50,173 DEBUG [StoreOpener-5c4b2526340ace3ba5d6e7aeab20f20c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/f 2023-07-23 21:10:50,173 INFO [StoreOpener-5c4b2526340ace3ba5d6e7aeab20f20c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c4b2526340ace3ba5d6e7aeab20f20c columnFamilyName f 2023-07-23 21:10:50,175 INFO [StoreOpener-5c4b2526340ace3ba5d6e7aeab20f20c-1] regionserver.HStore(310): Store=5c4b2526340ace3ba5d6e7aeab20f20c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,178 DEBUG [StoreOpener-6cf01962c34d31abe83bc5c26e1f54f4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/f 2023-07-23 21:10:50,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a930743917a64f683bb3541e65b4bbee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10099495360, jitterRate=-0.059411197900772095}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a930743917a64f683bb3541e65b4bbee: 2023-07-23 21:10:50,179 DEBUG [StoreOpener-6cf01962c34d31abe83bc5c26e1f54f4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/f 2023-07-23 21:10:50,180 INFO [StoreOpener-6cf01962c34d31abe83bc5c26e1f54f4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6cf01962c34d31abe83bc5c26e1f54f4 columnFamilyName f 2023-07-23 21:10:50,181 INFO [StoreOpener-6cf01962c34d31abe83bc5c26e1f54f4-1] 
regionserver.HStore(310): Store=6cf01962c34d31abe83bc5c26e1f54f4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee., pid=36, masterSystemTime=1690146650141 2023-07-23 21:10:50,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 19231f9db179525f2bc140ae04139a99; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11995858880, jitterRate=0.11720141768455505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 19231f9db179525f2bc140ae04139a99: 2023-07-23 21:10:50,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:50,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:50,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 
2023-07-23 21:10:50,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 134135b7471ca4f427a304c701ae4217, NAME => 'Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-23 21:10:50,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99., pid=35, masterSystemTime=1690146650141 2023-07-23 21:10:50,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c4b2526340ace3ba5d6e7aeab20f20c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9795510720, jitterRate=-0.08772197365760803}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c4b2526340ace3ba5d6e7aeab20f20c: 2023-07-23 21:10:50,200 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=a930743917a64f683bb3541e65b4bbee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:50,200 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650200"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650200"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650200"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650200"}]},"ts":"1690146650200"} 2023-07-23 21:10:50,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c., pid=43, 
masterSystemTime=1690146650158 2023-07-23 21:10:50,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:50,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4362896728e8f23b0010c41e1f288c84, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-23 21:10:50,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,204 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=19231f9db179525f2bc140ae04139a99, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:50,204 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690146649237.19231f9db179525f2bc140ae04139a99.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650203"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650203"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650203"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650203"}]},"ts":"1690146650203"} 2023-07-23 21:10:50,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6cf01962c34d31abe83bc5c26e1f54f4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9992320640, jitterRate=-0.0693926215171814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6cf01962c34d31abe83bc5c26e1f54f4: 2023-07-23 21:10:50,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading 
for 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:50,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:50,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:50,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 84264fc15b9b146b3a3191af3f7589a0, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-23 21:10:50,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,206 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4., pid=42, masterSystemTime=1690146650154 2023-07-23 21:10:50,207 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=5c4b2526340ace3ba5d6e7aeab20f20c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:50,207 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650207"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650207"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650207"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650207"}]},"ts":"1690146650207"} 2023-07-23 21:10:50,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-23 21:10:50,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure a930743917a64f683bb3541e65b4bbee, server=jenkins-hbase4.apache.org,46485,1690146642211 in 216 msec 2023-07-23 21:10:50,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:50,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:50,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:50,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 545ce0982cad2c351e7e32ca135e6c68, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-23 21:10:50,209 INFO [StoreOpener-134135b7471ca4f427a304c701ae4217-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,210 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=6cf01962c34d31abe83bc5c26e1f54f4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,210 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650210"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650210"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650210"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650210"}]},"ts":"1690146650210"} 2023-07-23 21:10:50,210 INFO [StoreOpener-4362896728e8f23b0010c41e1f288c84-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,211 INFO [StoreOpener-84264fc15b9b146b3a3191af3f7589a0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family f of region 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,213 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=27 2023-07-23 21:10:50,213 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=27, state=SUCCESS; OpenRegionProcedure 19231f9db179525f2bc140ae04139a99, server=jenkins-hbase4.apache.org,45637,1690146645550 in 221 msec 2023-07-23 21:10:50,213 DEBUG [StoreOpener-4362896728e8f23b0010c41e1f288c84-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/f 2023-07-23 21:10:50,213 DEBUG [StoreOpener-4362896728e8f23b0010c41e1f288c84-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/f 2023-07-23 21:10:50,214 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, ASSIGN in 388 msec 2023-07-23 21:10:50,215 DEBUG [StoreOpener-134135b7471ca4f427a304c701ae4217-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/f 2023-07-23 21:10:50,215 DEBUG [StoreOpener-84264fc15b9b146b3a3191af3f7589a0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/f 2023-07-23 21:10:50,216 DEBUG [StoreOpener-84264fc15b9b146b3a3191af3f7589a0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/f 2023-07-23 21:10:50,215 DEBUG [StoreOpener-134135b7471ca4f427a304c701ae4217-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/f 2023-07-23 21:10:50,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=33 2023-07-23 21:10:50,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=33, state=SUCCESS; OpenRegionProcedure 5c4b2526340ace3ba5d6e7aeab20f20c, server=jenkins-hbase4.apache.org,42335,1690146647320 in 207 msec 2023-07-23 21:10:50,216 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=19231f9db179525f2bc140ae04139a99, ASSIGN in 392 msec 2023-07-23 21:10:50,217 INFO [StoreOpener-84264fc15b9b146b3a3191af3f7589a0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 84264fc15b9b146b3a3191af3f7589a0 columnFamilyName f 2023-07-23 21:10:50,217 INFO [StoreOpener-4362896728e8f23b0010c41e1f288c84-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4362896728e8f23b0010c41e1f288c84 columnFamilyName f 2023-07-23 21:10:50,217 INFO [StoreOpener-134135b7471ca4f427a304c701ae4217-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 134135b7471ca4f427a304c701ae4217 columnFamilyName f 2023-07-23 21:10:50,218 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=26 2023-07-23 21:10:50,218 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=26, state=SUCCESS; OpenRegionProcedure 6cf01962c34d31abe83bc5c26e1f54f4, server=jenkins-hbase4.apache.org,42727,1690146641774 in 210 msec 2023-07-23 21:10:50,219 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, ASSIGN in 395 msec 2023-07-23 21:10:50,219 INFO [StoreOpener-134135b7471ca4f427a304c701ae4217-1] regionserver.HStore(310): Store=134135b7471ca4f427a304c701ae4217/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,220 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, ASSIGN in 397 msec 2023-07-23 21:10:50,221 INFO [StoreOpener-4362896728e8f23b0010c41e1f288c84-1] regionserver.HStore(310): Store=4362896728e8f23b0010c41e1f288c84/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,222 INFO [StoreOpener-545ce0982cad2c351e7e32ca135e6c68-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,222 INFO [StoreOpener-84264fc15b9b146b3a3191af3f7589a0-1] regionserver.HStore(310): Store=84264fc15b9b146b3a3191af3f7589a0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,225 DEBUG [StoreOpener-545ce0982cad2c351e7e32ca135e6c68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/f 2023-07-23 21:10:50,225 DEBUG [StoreOpener-545ce0982cad2c351e7e32ca135e6c68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/f 2023-07-23 21:10:50,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,226 INFO [StoreOpener-545ce0982cad2c351e7e32ca135e6c68-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, 
region 545ce0982cad2c351e7e32ca135e6c68 columnFamilyName f 2023-07-23 21:10:50,227 INFO [StoreOpener-545ce0982cad2c351e7e32ca135e6c68-1] regionserver.HStore(310): Store=545ce0982cad2c351e7e32ca135e6c68/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 134135b7471ca4f427a304c701ae4217; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9662828480, jitterRate=-0.10007897019386292}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 134135b7471ca4f427a304c701ae4217: 2023-07-23 21:10:50,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4362896728e8f23b0010c41e1f288c84; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9805099360, jitterRate=-0.08682896196842194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4362896728e8f23b0010c41e1f288c84: 2023-07-23 21:10:50,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217., pid=39, masterSystemTime=1690146650141 2023-07-23 21:10:50,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84., pid=38, masterSystemTime=1690146650141 2023-07-23 21:10:50,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:50,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:50,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 84264fc15b9b146b3a3191af3f7589a0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10799982880, jitterRate=0.005826786160469055}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 84264fc15b9b146b3a3191af3f7589a0: 2023-07-23 21:10:50,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0., pid=44, masterSystemTime=1690146650158 2023-07-23 21:10:50,240 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=134135b7471ca4f427a304c701ae4217, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:50,240 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650240"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650240"}]},"ts":"1690146650240"} 2023-07-23 21:10:50,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:50,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:50,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 
2023-07-23 21:10:50,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0533d1e24e45fc02629db77ac654984b, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-23 21:10:50,243 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=4362896728e8f23b0010c41e1f288c84, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:50,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,243 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650243"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650243"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650243"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650243"}]},"ts":"1690146650243"} 2023-07-23 21:10:50,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:50,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 
2023-07-23 21:10:50,246 INFO [StoreOpener-0533d1e24e45fc02629db77ac654984b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,246 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=84264fc15b9b146b3a3191af3f7589a0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:50,246 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650246"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650246"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650246"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650246"}]},"ts":"1690146650246"} 2023-07-23 21:10:50,248 DEBUG [StoreOpener-0533d1e24e45fc02629db77ac654984b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/f 2023-07-23 21:10:50,249 DEBUG [StoreOpener-0533d1e24e45fc02629db77ac654984b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/f 2023-07-23 21:10:50,249 INFO [StoreOpener-0533d1e24e45fc02629db77ac654984b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0533d1e24e45fc02629db77ac654984b columnFamilyName f 2023-07-23 21:10:50,250 INFO [StoreOpener-0533d1e24e45fc02629db77ac654984b-1] regionserver.HStore(310): Store=0533d1e24e45fc02629db77ac654984b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,251 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=25 2023-07-23 21:10:50,251 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=25, state=SUCCESS; OpenRegionProcedure 134135b7471ca4f427a304c701ae4217, server=jenkins-hbase4.apache.org,46485,1690146642211 in 253 msec 2023-07-23 21:10:50,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,254 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=31 2023-07-23 21:10:50,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=31, state=SUCCESS; OpenRegionProcedure 4362896728e8f23b0010c41e1f288c84, server=jenkins-hbase4.apache.org,45637,1690146645550 in 257 msec 2023-07-23 21:10:50,256 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, ASSIGN in 430 msec 2023-07-23 21:10:50,257 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=28 2023-07-23 21:10:50,258 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=28, state=SUCCESS; OpenRegionProcedure 84264fc15b9b146b3a3191af3f7589a0, server=jenkins-hbase4.apache.org,42335,1690146647320 in 245 msec 2023-07-23 21:10:50,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 545ce0982cad2c351e7e32ca135e6c68; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11197011680, jitterRate=0.042802974581718445}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 545ce0982cad2c351e7e32ca135e6c68: 2023-07-23 21:10:50,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, ASSIGN in 434 msec 2023-07-23 21:10:50,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, ASSIGN in 437 msec 2023-07-23 21:10:50,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68., pid=40, masterSystemTime=1690146650154 2023-07-23 21:10:50,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:50,263 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 
2023-07-23 21:10:50,263 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:50,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4a591b15fcf41c839cb213d14daf536, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-23 21:10:50,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,267 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0533d1e24e45fc02629db77ac654984b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10960817120, jitterRate=0.020805642008781433}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0533d1e24e45fc02629db77ac654984b: 2023-07-23 21:10:50,269 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=545ce0982cad2c351e7e32ca135e6c68, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,269 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650269"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650269"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650269"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650269"}]},"ts":"1690146650269"} 2023-07-23 21:10:50,269 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b., pid=37, masterSystemTime=1690146650141 2023-07-23 21:10:50,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:50,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:50,273 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=0533d1e24e45fc02629db77ac654984b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:50,273 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650273"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650273"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650273"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650273"}]},"ts":"1690146650273"} 2023-07-23 21:10:50,276 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=34 2023-07-23 21:10:50,277 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=34, state=SUCCESS; OpenRegionProcedure 545ce0982cad2c351e7e32ca135e6c68, server=jenkins-hbase4.apache.org,42727,1690146641774 in 275 msec 2023-07-23 21:10:50,279 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, ASSIGN in 455 msec 2023-07-23 21:10:50,279 INFO [StoreOpener-e4a591b15fcf41c839cb213d14daf536-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=29 2023-07-23 21:10:50,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=29, state=SUCCESS; OpenRegionProcedure 0533d1e24e45fc02629db77ac654984b, server=jenkins-hbase4.apache.org,45637,1690146645550 in 286 msec 2023-07-23 21:10:50,281 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, ASSIGN in 458 msec 2023-07-23 21:10:50,282 DEBUG [StoreOpener-e4a591b15fcf41c839cb213d14daf536-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/f 2023-07-23 21:10:50,282 DEBUG [StoreOpener-e4a591b15fcf41c839cb213d14daf536-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/f 2023-07-23 21:10:50,282 INFO [StoreOpener-e4a591b15fcf41c839cb213d14daf536-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4a591b15fcf41c839cb213d14daf536 columnFamilyName f 2023-07-23 21:10:50,283 INFO [StoreOpener-e4a591b15fcf41c839cb213d14daf536-1] regionserver.HStore(310): Store=e4a591b15fcf41c839cb213d14daf536/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:50,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:50,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4a591b15fcf41c839cb213d14daf536; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9946585920, jitterRate=-0.0736519992351532}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:50,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4a591b15fcf41c839cb213d14daf536: 2023-07-23 21:10:50,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536., pid=41, masterSystemTime=1690146650154 2023-07-23 21:10:50,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:50,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 
2023-07-23 21:10:50,296 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=e4a591b15fcf41c839cb213d14daf536, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,296 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650296"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146650296"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146650296"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146650296"}]},"ts":"1690146650296"} 2023-07-23 21:10:50,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-23 21:10:50,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure e4a591b15fcf41c839cb213d14daf536, server=jenkins-hbase4.apache.org,42727,1690146641774 in 300 msec 2023-07-23 21:10:50,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=24 2023-07-23 21:10:50,305 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, ASSIGN in 480 msec 2023-07-23 21:10:50,306 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:50,306 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146650306"}]},"ts":"1690146650306"} 2023-07-23 21:10:50,308 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-23 21:10:50,311 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:50,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 1.0730 sec 2023-07-23 21:10:50,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:50,355 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 24 completed 2023-07-23 21:10:50,356 DEBUG [Listener at localhost/38995] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-23 21:10:50,357 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:50,366 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 
2023-07-23 21:10:50,367 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:50,367 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 2023-07-23 21:10:50,370 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-23 21:10:50,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateMultiRegion 2023-07-23 21:10:50,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=45, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:50,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-23 21:10:50,376 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146650376"}]},"ts":"1690146650376"} 2023-07-23 21:10:50,378 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-23 21:10:50,381 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-23 21:10:50,386 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, UNASSIGN}, {pid=47, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=19231f9db179525f2bc140ae04139a99, UNASSIGN}, {pid=48, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, UNASSIGN}, {pid=49, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, UNASSIGN}, {pid=50, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, UNASSIGN}, {pid=51, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, UNASSIGN}, {pid=52, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, UNASSIGN}, {pid=53, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, UNASSIGN}, {pid=54, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, UNASSIGN}, {pid=55, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, UNASSIGN}] 2023-07-23 21:10:50,388 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=45, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, UNASSIGN 2023-07-23 21:10:50,389 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, UNASSIGN 2023-07-23 21:10:50,389 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, UNASSIGN 2023-07-23 21:10:50,391 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, UNASSIGN 2023-07-23 21:10:50,391 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, UNASSIGN 2023-07-23 21:10:50,393 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=134135b7471ca4f427a304c701ae4217, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:50,393 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=4362896728e8f23b0010c41e1f288c84, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:50,393 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650393"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650393"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650393"}]},"ts":"1690146650393"} 2023-07-23 21:10:50,393 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650393"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650393"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650393"}]},"ts":"1690146650393"} 2023-07-23 21:10:50,394 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=5c4b2526340ace3ba5d6e7aeab20f20c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:50,394 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650394"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650394"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650394"}]},"ts":"1690146650394"} 2023-07-23 21:10:50,394 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=545ce0982cad2c351e7e32ca135e6c68, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,394 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650394"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650394"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650394"}]},"ts":"1690146650394"} 2023-07-23 21:10:50,394 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=a930743917a64f683bb3541e65b4bbee, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:50,395 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650394"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650394"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650394"}]},"ts":"1690146650394"} 2023-07-23 21:10:50,397 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE; CloseRegionProcedure 134135b7471ca4f427a304c701ae4217, server=jenkins-hbase4.apache.org,46485,1690146642211}] 2023-07-23 21:10:50,398 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=51, state=RUNNABLE; CloseRegionProcedure 4362896728e8f23b0010c41e1f288c84, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:50,401 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=53, state=RUNNABLE; CloseRegionProcedure 5c4b2526340ace3ba5d6e7aeab20f20c, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:50,403 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; CloseRegionProcedure 545ce0982cad2c351e7e32ca135e6c68, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:50,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=52, state=RUNNABLE; CloseRegionProcedure a930743917a64f683bb3541e65b4bbee, server=jenkins-hbase4.apache.org,46485,1690146642211}] 2023-07-23 21:10:50,407 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, UNASSIGN 2023-07-23 21:10:50,408 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, UNASSIGN 2023-07-23 21:10:50,408 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, UNASSIGN 2023-07-23 21:10:50,410 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e4a591b15fcf41c839cb213d14daf536, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,410 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650409"}]},"ts":"1690146650409"} 2023-07-23 21:10:50,410 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=0533d1e24e45fc02629db77ac654984b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:50,410 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650410"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650410"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650410"}]},"ts":"1690146650410"} 2023-07-23 21:10:50,411 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=84264fc15b9b146b3a3191af3f7589a0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:50,411 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650411"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650411"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650411"}]},"ts":"1690146650411"} 2023-07-23 21:10:50,413 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=19231f9db179525f2bc140ae04139a99, UNASSIGN 2023-07-23 21:10:50,413 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, UNASSIGN 2023-07-23 21:10:50,414 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=19231f9db179525f2bc140ae04139a99, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:50,414 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690146649237.19231f9db179525f2bc140ae04139a99.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650414"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650414"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650414"}]},"ts":"1690146650414"} 2023-07-23 21:10:50,415 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=50, state=RUNNABLE; CloseRegionProcedure e4a591b15fcf41c839cb213d14daf536, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:50,415 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=6cf01962c34d31abe83bc5c26e1f54f4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:50,415 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650415"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650415"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650415"}]},"ts":"1690146650415"} 2023-07-23 21:10:50,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=49, state=RUNNABLE; CloseRegionProcedure 0533d1e24e45fc02629db77ac654984b, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:50,421 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=48, state=RUNNABLE; CloseRegionProcedure 84264fc15b9b146b3a3191af3f7589a0, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:50,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=47, state=RUNNABLE; CloseRegionProcedure 19231f9db179525f2bc140ae04139a99, server=jenkins-hbase4.apache.org,45637,1690146645550}] 2023-07-23 21:10:50,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=46, state=RUNNABLE; CloseRegionProcedure 6cf01962c34d31abe83bc5c26e1f54f4, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:50,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-23 21:10:50,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,555 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 134135b7471ca4f427a304c701ae4217, disabling compactions & flushes 2023-07-23 21:10:50,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:50,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 84264fc15b9b146b3a3191af3f7589a0, disabling compactions & flushes 2023-07-23 21:10:50,556 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 19231f9db179525f2bc140ae04139a99, disabling compactions & flushes 2023-07-23 21:10:50,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:50,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 
2023-07-23 21:10:50,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:50,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. after waiting 0 ms 2023-07-23 21:10:50,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. after waiting 0 ms 2023-07-23 21:10:50,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. after waiting 0 ms 2023-07-23 21:10:50,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:50,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:50,558 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4a591b15fcf41c839cb213d14daf536, disabling compactions & flushes 2023-07-23 21:10:50,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:50,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 2023-07-23 21:10:50,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. after waiting 0 ms 2023-07-23 21:10:50,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 
2023-07-23 21:10:50,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0. 2023-07-23 21:10:50,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 84264fc15b9b146b3a3191af3f7589a0: 2023-07-23 21:10:50,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:50,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99. 2023-07-23 21:10:50,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 19231f9db179525f2bc140ae04139a99: 2023-07-23 21:10:50,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c4b2526340ace3ba5d6e7aeab20f20c, disabling compactions & flushes 2023-07-23 21:10:50,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:50,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:50,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. after waiting 0 ms 2023-07-23 21:10:50,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 
2023-07-23 21:10:50,588 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=84264fc15b9b146b3a3191af3f7589a0, regionState=CLOSED 2023-07-23 21:10:50,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,588 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650588"}]},"ts":"1690146650588"} 2023-07-23 21:10:50,589 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217. 2023-07-23 21:10:50,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 134135b7471ca4f427a304c701ae4217: 2023-07-23 21:10:50,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:50,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4362896728e8f23b0010c41e1f288c84, disabling compactions & flushes 2023-07-23 21:10:50,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:50,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 2023-07-23 21:10:50,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. after waiting 0 ms 2023-07-23 21:10:50,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 
2023-07-23 21:10:50,595 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=19231f9db179525f2bc140ae04139a99, regionState=CLOSED 2023-07-23 21:10:50,595 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690146649237.19231f9db179525f2bc140ae04139a99.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650595"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650595"}]},"ts":"1690146650595"} 2023-07-23 21:10:50,598 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=134135b7471ca4f427a304c701ae4217, regionState=CLOSED 2023-07-23 21:10:50,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,598 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650597"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650597"}]},"ts":"1690146650597"} 2023-07-23 21:10:50,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=48 2023-07-23 21:10:50,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=48, state=SUCCESS; CloseRegionProcedure 84264fc15b9b146b3a3191af3f7589a0, server=jenkins-hbase4.apache.org,42335,1690146647320 in 170 msec 2023-07-23 21:10:50,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536. 
2023-07-23 21:10:50,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4a591b15fcf41c839cb213d14daf536: 2023-07-23 21:10:50,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:50,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=47 2023-07-23 21:10:50,602 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=84264fc15b9b146b3a3191af3f7589a0, UNASSIGN in 217 msec 2023-07-23 21:10:50,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=47, state=SUCCESS; CloseRegionProcedure 19231f9db179525f2bc140ae04139a99, server=jenkins-hbase4.apache.org,45637,1690146645550 in 175 msec 2023-07-23 21:10:50,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:50,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,604 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e4a591b15fcf41c839cb213d14daf536, regionState=CLOSED 2023-07-23 21:10:50,604 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650604"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650604"}]},"ts":"1690146650604"} 2023-07-23 21:10:50,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=55 2023-07-23 21:10:50,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=19231f9db179525f2bc140ae04139a99, UNASSIGN in 220 msec 2023-07-23 21:10:50,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; CloseRegionProcedure 134135b7471ca4f427a304c701ae4217, server=jenkins-hbase4.apache.org,46485,1690146642211 in 204 msec 2023-07-23 21:10:50,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a930743917a64f683bb3541e65b4bbee, disabling compactions & flushes 2023-07-23 21:10:50,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:50,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:50,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 
after waiting 0 ms 2023-07-23 21:10:50,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:50,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6cf01962c34d31abe83bc5c26e1f54f4, disabling compactions & flushes 2023-07-23 21:10:50,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:50,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:50,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. after waiting 0 ms 2023-07-23 21:10:50,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 2023-07-23 21:10:50,614 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=134135b7471ca4f427a304c701ae4217, UNASSIGN in 220 msec 2023-07-23 21:10:50,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c. 2023-07-23 21:10:50,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c4b2526340ace3ba5d6e7aeab20f20c: 2023-07-23 21:10:50,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84. 
2023-07-23 21:10:50,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4362896728e8f23b0010c41e1f288c84: 2023-07-23 21:10:50,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:50,620 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=50 2023-07-23 21:10:50,620 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=5c4b2526340ace3ba5d6e7aeab20f20c, regionState=CLOSED 2023-07-23 21:10:50,620 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=50, state=SUCCESS; CloseRegionProcedure e4a591b15fcf41c839cb213d14daf536, server=jenkins-hbase4.apache.org,42727,1690146641774 in 191 msec 2023-07-23 21:10:50,620 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650620"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650620"}]},"ts":"1690146650620"} 2023-07-23 21:10:50,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:50,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0533d1e24e45fc02629db77ac654984b, disabling compactions & flushes 2023-07-23 21:10:50,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:50,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:50,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. after waiting 0 ms 2023-07-23 21:10:50,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:50,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4. 
2023-07-23 21:10:50,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6cf01962c34d31abe83bc5c26e1f54f4: 2023-07-23 21:10:50,635 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=4362896728e8f23b0010c41e1f288c84, regionState=CLOSED 2023-07-23 21:10:50,635 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650635"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650635"}]},"ts":"1690146650635"} 2023-07-23 21:10:50,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee. 2023-07-23 21:10:50,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a930743917a64f683bb3541e65b4bbee: 2023-07-23 21:10:50,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:50,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,640 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e4a591b15fcf41c839cb213d14daf536, UNASSIGN in 235 msec 2023-07-23 21:10:50,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 545ce0982cad2c351e7e32ca135e6c68, disabling compactions & flushes 2023-07-23 21:10:50,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 
2023-07-23 21:10:50,646 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=6cf01962c34d31abe83bc5c26e1f54f4, regionState=CLOSED 2023-07-23 21:10:50,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:50,647 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650646"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650646"}]},"ts":"1690146650646"} 2023-07-23 21:10:50,647 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=a930743917a64f683bb3541e65b4bbee, regionState=CLOSED 2023-07-23 21:10:50,648 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650647"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650647"}]},"ts":"1690146650647"} 2023-07-23 21:10:50,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:50,648 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=53 2023-07-23 21:10:50,649 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=53, state=SUCCESS; CloseRegionProcedure 5c4b2526340ace3ba5d6e7aeab20f20c, server=jenkins-hbase4.apache.org,42335,1690146647320 in 238 msec 2023-07-23 21:10:50,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. after waiting 0 ms 2023-07-23 21:10:50,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 
2023-07-23 21:10:50,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=51 2023-07-23 21:10:50,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=51, state=SUCCESS; CloseRegionProcedure 4362896728e8f23b0010c41e1f288c84, server=jenkins-hbase4.apache.org,45637,1690146645550 in 247 msec 2023-07-23 21:10:50,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c4b2526340ace3ba5d6e7aeab20f20c, UNASSIGN in 263 msec 2023-07-23 21:10:50,654 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4362896728e8f23b0010c41e1f288c84, UNASSIGN in 267 msec 2023-07-23 21:10:50,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=46 2023-07-23 21:10:50,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=46, state=SUCCESS; CloseRegionProcedure 6cf01962c34d31abe83bc5c26e1f54f4, server=jenkins-hbase4.apache.org,42727,1690146641774 in 225 msec 2023-07-23 21:10:50,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:50,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b. 2023-07-23 21:10:50,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0533d1e24e45fc02629db77ac654984b: 2023-07-23 21:10:50,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:50,660 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=52 2023-07-23 21:10:50,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=6cf01962c34d31abe83bc5c26e1f54f4, UNASSIGN in 272 msec 2023-07-23 21:10:50,660 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=52, state=SUCCESS; CloseRegionProcedure a930743917a64f683bb3541e65b4bbee, server=jenkins-hbase4.apache.org,46485,1690146642211 in 248 msec 2023-07-23 21:10:50,661 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=0533d1e24e45fc02629db77ac654984b, regionState=CLOSED 2023-07-23 21:10:50,661 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650661"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650661"}]},"ts":"1690146650661"} 2023-07-23 21:10:50,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 
2023-07-23 21:10:50,664 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a930743917a64f683bb3541e65b4bbee, UNASSIGN in 275 msec 2023-07-23 21:10:50,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68. 2023-07-23 21:10:50,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 545ce0982cad2c351e7e32ca135e6c68: 2023-07-23 21:10:50,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=49 2023-07-23 21:10:50,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=49, state=SUCCESS; CloseRegionProcedure 0533d1e24e45fc02629db77ac654984b, server=jenkins-hbase4.apache.org,45637,1690146645550 in 246 msec 2023-07-23 21:10:50,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:50,669 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=545ce0982cad2c351e7e32ca135e6c68, regionState=CLOSED 2023-07-23 21:10:50,669 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650669"}]},"ts":"1690146650669"} 2023-07-23 21:10:50,671 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=0533d1e24e45fc02629db77ac654984b, UNASSIGN in 284 msec 2023-07-23 21:10:50,673 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-23 21:10:50,673 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; CloseRegionProcedure 545ce0982cad2c351e7e32ca135e6c68, server=jenkins-hbase4.apache.org,42727,1690146641774 in 268 msec 2023-07-23 21:10:50,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=45 2023-07-23 21:10:50,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=545ce0982cad2c351e7e32ca135e6c68, UNASSIGN in 288 msec 2023-07-23 21:10:50,677 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146650676"}]},"ts":"1690146650676"} 2023-07-23 21:10:50,679 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-23 21:10:50,681 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-23 21:10:50,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-23 21:10:50,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 
in 312 msec 2023-07-23 21:10:50,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-23 21:10:50,984 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateMultiRegion, procId: 45 completed 2023-07-23 21:10:50,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateMultiRegion 2023-07-23 21:10:50,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:50,989 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=66, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:50,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-23 21:10:50,991 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=66, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:50,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:50,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:51,006 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:51,012 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/recovered.edits] 2023-07-23 21:10:51,012 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/recovered.edits] 2023-07-23 21:10:51,012 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/recovered.edits] 2023-07-23 21:10:51,014 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/recovered.edits] 2023-07-23 21:10:51,014 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/recovered.edits] 2023-07-23 21:10:51,014 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/recovered.edits] 2023-07-23 21:10:51,015 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/recovered.edits] 2023-07-23 21:10:51,015 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/recovered.edits] 2023-07-23 21:10:51,049 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99/recovered.edits/4.seqid 2023-07-23 21:10:51,051 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99 2023-07-23 21:10:51,052 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:51,055 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0/recovered.edits/4.seqid 2023-07-23 21:10:51,056 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4/recovered.edits/4.seqid 2023-07-23 21:10:51,056 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee/recovered.edits/4.seqid 2023-07-23 21:10:51,056 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/recovered.edits/4.seqid to 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84/recovered.edits/4.seqid 2023-07-23 21:10:51,057 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/recovered.edits] 2023-07-23 21:10:51,058 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/84264fc15b9b146b3a3191af3f7589a0 2023-07-23 21:10:51,058 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:51,058 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c/recovered.edits/4.seqid 2023-07-23 21:10:51,059 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/6cf01962c34d31abe83bc5c26e1f54f4 2023-07-23 21:10:51,059 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/4362896728e8f23b0010c41e1f288c84 2023-07-23 21:10:51,059 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b/recovered.edits/4.seqid 2023-07-23 21:10:51,060 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536/recovered.edits/4.seqid 2023-07-23 21:10:51,061 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/5c4b2526340ace3ba5d6e7aeab20f20c 2023-07-23 21:10:51,061 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/a930743917a64f683bb3541e65b4bbee 2023-07-23 21:10:51,062 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/0533d1e24e45fc02629db77ac654984b 2023-07-23 21:10:51,062 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/e4a591b15fcf41c839cb213d14daf536 2023-07-23 21:10:51,064 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/recovered.edits] 2023-07-23 21:10:51,071 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68/recovered.edits/4.seqid 2023-07-23 21:10:51,072 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/545ce0982cad2c351e7e32ca135e6c68 2023-07-23 21:10:51,074 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217/recovered.edits/4.seqid 2023-07-23 21:10:51,075 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateMultiRegion/134135b7471ca4f427a304c701ae4217 2023-07-23 21:10:51,075 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-23 21:10:51,078 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=66, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:51,084 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-23 21:10:51,089 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 
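The HFileArchiver lines above move each region directory from <root>/.tmp/data/default/Group_testCreateMultiRegion/<region> to <root>/archive/data/default/Group_testCreateMultiRegion/<region> before deleting the source. The following is only an illustrative sketch of that path layout; it is not HBase's HFileArchiver code, and the helper name and the reuse of this log's root directory are assumptions:

    import org.apache.hadoop.fs.Path;

    public class ArchivePathLayout {
      // Hypothetical helper: shows where the archiver places a region directory,
      // mirroring the source/destination pairs logged above.
      static Path archiveDir(Path rootDir, String ns, String table, String encodedRegion) {
        return new Path(rootDir, "archive/data/" + ns + "/" + table + "/" + encodedRegion);
      }

      public static void main(String[] args) {
        Path root = new Path(
            "hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914");
        System.out.println(archiveDir(root, "default", "Group_testCreateMultiRegion",
            "19231f9db179525f2bc140ae04139a99"));
        // -> .../archive/data/default/Group_testCreateMultiRegion/19231f9db179525f2bc140ae04139a99
      }
    }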
2023-07-23 21:10:51,091 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=66, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:51,091 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 2023-07-23 21:10:51,091 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690146649237.19231f9db179525f2bc140ae04139a99.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651091"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,095 INFO 
[PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-23 21:10:51,096 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6cf01962c34d31abe83bc5c26e1f54f4, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690146649237.6cf01962c34d31abe83bc5c26e1f54f4.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, {ENCODED => 19231f9db179525f2bc140ae04139a99, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690146649237.19231f9db179525f2bc140ae04139a99.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => 84264fc15b9b146b3a3191af3f7589a0, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690146649237.84264fc15b9b146b3a3191af3f7589a0.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => 0533d1e24e45fc02629db77ac654984b, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690146649237.0533d1e24e45fc02629db77ac654984b.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => e4a591b15fcf41c839cb213d14daf536, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690146649237.e4a591b15fcf41c839cb213d14daf536.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => 4362896728e8f23b0010c41e1f288c84, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690146649237.4362896728e8f23b0010c41e1f288c84.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => a930743917a64f683bb3541e65b4bbee, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690146649237.a930743917a64f683bb3541e65b4bbee.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => 5c4b2526340ace3ba5d6e7aeab20f20c, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690146649237.5c4b2526340ace3ba5d6e7aeab20f20c.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => 545ce0982cad2c351e7e32ca135e6c68, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690146649237.545ce0982cad2c351e7e32ca135e6c68.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => 134135b7471ca4f427a304c701ae4217, NAME => 'Group_testCreateMultiRegion,,1690146649237.134135b7471ca4f427a304c701ae4217.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-23 21:10:51,096 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateMultiRegion' as deleted. 
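The meta deletes above ("Deleted 10 regions from META") and the "Marking 'Group_testCreateMultiRegion' as deleted" step are the tail end of the DeleteTableProcedure (pid=66). As a hedged sketch of the client-side counterpart (connection setup assumed, not taken from the test source), the request that drives it is Admin.deleteTable:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DeleteTableExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
          // The table must already be disabled (see the DisableTableProcedure earlier in this log).
          admin.deleteTable(tn);
          // After the procedure finishes, the descriptor and the meta rows are gone.
          System.out.println("exists? " + admin.tableExists(tn)); // expected: false
        }
      }
    }

The "Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 66 completed" line just below is the client acknowledgement of this request.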
2023-07-23 21:10:51,096 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146651096"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-23 21:10:51,098 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-23 21:10:51,101 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=66, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:10:51,102 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 116 msec 2023-07-23 21:10:51,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-23 21:10:51,299 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 66 completed 2023-07-23 21:10:51,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:51,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
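The ListRSGroupInfos/MoveTables/MoveServers/RemoveRSGroup/AddRSGroup requests around this point (continuing below) come from the test's per-method rsgroup cleanup, and the moveServers call that follows fails with a ConstraintException because the master's address (jenkins-hbase4.apache.org:35573) is not a live region server. A hedged sketch of those client calls, using the RSGroupAdminClient that appears in the stack trace below (connection setup assumed; this is not the test's exact code):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.removeRSGroup("master");   // RemoveRSGroup, as logged below (the group is empty here)
          rsGroupAdmin.addRSGroup("master");      // AddRSGroup, as logged below
          try {
            // 35573 is the master's RPC port, not a region server address, so the
            // server rejects it; this is the ConstraintException logged below.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 35573)),
                "master");
          } catch (ConstraintException expected) {
            // "Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist."
          }
          rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));
        }
      }
    }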
2023-07-23 21:10:51,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:51,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:51,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:51,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:51,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:51,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:51,317 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:51,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:51,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:51,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:51,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:51,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:51,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 250 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147851331, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:51,333 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:51,335 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,336 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:51,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:51,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:51,355 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=507 (was 496) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1984174199_17 at /127.0.0.1:34018 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x17f15d9f-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1206779787_17 at /127.0.0.1:44502 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for 
client DFSClient_NONMAPREDUCE_1952713836_17 at /127.0.0.1:52830 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-993029114_17 at /127.0.0.1:34032 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=795 (was 759) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 479), ProcessCount=173 (was 173), AvailableMemoryMB=8095 (was 8147) 2023-07-23 21:10:51,356 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-23 21:10:51,371 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=507, OpenFileDescriptor=795, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=8094 2023-07-23 21:10:51,372 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-23 21:10:51,372 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-23 21:10:51,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:51,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:51,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:51,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:51,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:51,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:51,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:51,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:51,389 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:51,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:51,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,392 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:51,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:51,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 278 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147851405, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:51,406 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:51,407 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,409 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:51,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:51,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:51,410 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-23 21:10:51,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:51,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:51,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-23 21:10:51,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:51,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:51,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:51,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-23 21:10:51,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42335] to rsgroup appInfo 2023-07-23 21:10:51,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:51,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:51,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:51,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42335,1690146647320] are moved back to default 2023-07-23 21:10:51,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-23 21:10:51,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:51,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-23 21:10:51,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:51,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-23 21:10:51,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=67, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-23 21:10:51,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=67 2023-07-23 21:10:51,473 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:51,476 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, state=SUCCESS; 
CreateNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-23 21:10:51,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=67 2023-07-23 21:10:51,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:51,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=68, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:51,587 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:51,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 68 2023-07-23 21:10:51,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-23 21:10:51,591 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,592 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,593 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:51,593 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:51,598 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:51,601 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:51,601 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 empty. 
2023-07-23 21:10:51,602 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:51,602 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-23 21:10:51,637 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:51,639 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6afbac1fbaba0161b62d9c872b7d72b7, NAME => 'Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:51,655 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:51,655 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 6afbac1fbaba0161b62d9c872b7d72b7, disabling compactions & flushes 2023-07-23 21:10:51,655 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:51,655 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:51,655 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. after waiting 0 ms 2023-07-23 21:10:51,655 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:51,655 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 
2023-07-23 21:10:51,655 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 6afbac1fbaba0161b62d9c872b7d72b7: 2023-07-23 21:10:51,659 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:51,663 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690146651662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146651662"}]},"ts":"1690146651662"} 2023-07-23 21:10:51,665 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:51,667 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:51,667 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146651667"}]},"ts":"1690146651667"} 2023-07-23 21:10:51,671 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-23 21:10:51,675 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, ASSIGN}] 2023-07-23 21:10:51,685 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, ASSIGN 2023-07-23 21:10:51,687 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=69, ppid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42335,1690146647320; forceNewPlan=false, retain=false 2023-07-23 21:10:51,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-23 21:10:51,839 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=6afbac1fbaba0161b62d9c872b7d72b7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:51,839 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690146651839"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146651839"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146651839"}]},"ts":"1690146651839"} 2023-07-23 21:10:51,843 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE; OpenRegionProcedure 
6afbac1fbaba0161b62d9c872b7d72b7, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:51,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-23 21:10:52,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:52,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6afbac1fbaba0161b62d9c872b7d72b7, NAME => 'Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:52,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:52,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,003 INFO [StoreOpener-6afbac1fbaba0161b62d9c872b7d72b7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,005 DEBUG [StoreOpener-6afbac1fbaba0161b62d9c872b7d72b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/f 2023-07-23 21:10:52,005 DEBUG [StoreOpener-6afbac1fbaba0161b62d9c872b7d72b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/f 2023-07-23 21:10:52,006 INFO [StoreOpener-6afbac1fbaba0161b62d9c872b7d72b7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6afbac1fbaba0161b62d9c872b7d72b7 columnFamilyName f 2023-07-23 21:10:52,006 INFO [StoreOpener-6afbac1fbaba0161b62d9c872b7d72b7-1] regionserver.HStore(310): Store=6afbac1fbaba0161b62d9c872b7d72b7/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:52,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:52,017 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6afbac1fbaba0161b62d9c872b7d72b7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9957222080, jitterRate=-0.07266142964363098}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:52,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6afbac1fbaba0161b62d9c872b7d72b7: 2023-07-23 21:10:52,018 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7., pid=70, masterSystemTime=1690146651996 2023-07-23 21:10:52,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:52,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 
2023-07-23 21:10:52,022 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=6afbac1fbaba0161b62d9c872b7d72b7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:52,022 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690146652022"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146652022"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146652022"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146652022"}]},"ts":"1690146652022"} 2023-07-23 21:10:52,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=69 2023-07-23 21:10:52,032 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; OpenRegionProcedure 6afbac1fbaba0161b62d9c872b7d72b7, server=jenkins-hbase4.apache.org,42335,1690146647320 in 184 msec 2023-07-23 21:10:52,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=68 2023-07-23 21:10:52,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=68, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, ASSIGN in 357 msec 2023-07-23 21:10:52,043 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:52,043 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146652043"}]},"ts":"1690146652043"} 2023-07-23 21:10:52,049 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-23 21:10:52,053 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:52,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 480 msec 2023-07-23 21:10:52,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-23 21:10:52,194 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 68 completed 2023-07-23 21:10:52,195 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:52,202 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] 
procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 21:10:52,214 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146652214"}]},"ts":"1690146652214"} 2023-07-23 21:10:52,217 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-23 21:10:52,220 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-23 21:10:52,221 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, UNASSIGN}] 2023-07-23 21:10:52,224 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, UNASSIGN 2023-07-23 21:10:52,225 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=6afbac1fbaba0161b62d9c872b7d72b7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:52,225 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690146652225"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146652225"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146652225"}]},"ts":"1690146652225"} 2023-07-23 21:10:52,227 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure 6afbac1fbaba0161b62d9c872b7d72b7, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:52,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 21:10:52,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6afbac1fbaba0161b62d9c872b7d72b7, disabling compactions & flushes 2023-07-23 21:10:52,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:52,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:52,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 
after waiting 0 ms 2023-07-23 21:10:52,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:52,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:52,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7. 2023-07-23 21:10:52,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6afbac1fbaba0161b62d9c872b7d72b7: 2023-07-23 21:10:52,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,388 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=6afbac1fbaba0161b62d9c872b7d72b7, regionState=CLOSED 2023-07-23 21:10:52,388 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690146652388"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146652388"}]},"ts":"1690146652388"} 2023-07-23 21:10:52,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-23 21:10:52,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure 6afbac1fbaba0161b62d9c872b7d72b7, server=jenkins-hbase4.apache.org,42335,1690146647320 in 163 msec 2023-07-23 21:10:52,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=71 2023-07-23 21:10:52,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=71, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=6afbac1fbaba0161b62d9c872b7d72b7, UNASSIGN in 170 msec 2023-07-23 21:10:52,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146652397"}]},"ts":"1690146652397"} 2023-07-23 21:10:52,398 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-23 21:10:52,400 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-23 21:10:52,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 198 msec 2023-07-23 21:10:52,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 21:10:52,517 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 71 completed 2023-07-23 21:10:52,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] 
master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,522 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-23 21:10:52,523 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:52,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:52,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:52,528 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-23 21:10:52,534 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/recovered.edits] 2023-07-23 21:10:52,542 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7/recovered.edits/4.seqid 2023-07-23 21:10:52,543 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_foo/Group_testCreateAndAssign/6afbac1fbaba0161b62d9c872b7d72b7 2023-07-23 21:10:52,543 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-23 21:10:52,546 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, 
state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,548 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-23 21:10:52,552 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-23 21:10:52,553 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,553 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-23 21:10:52,554 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146652553"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:52,556 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:52,556 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6afbac1fbaba0161b62d9c872b7d72b7, NAME => 'Group_foo:Group_testCreateAndAssign,,1690146651573.6afbac1fbaba0161b62d9c872b7d72b7.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:52,556 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 2023-07-23 21:10:52,556 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146652556"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:52,558 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-23 21:10:52,564 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:10:52,566 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 46 msec 2023-07-23 21:10:52,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-23 21:10:52,631 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 74 completed 2023-07-23 21:10:52,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-23 21:10:52,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:10:52,647 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:10:52,651 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 
2023-07-23 21:10:52,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-23 21:10:52,655 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:10:52,657 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-23 21:10:52,657 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:52,658 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:10:52,660 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:10:52,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 21 msec 2023-07-23 21:10:52,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-23 21:10:52,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:52,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:52,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:52,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:52,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:52,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:52,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:52,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:52,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:52,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:52,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:52,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:52,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:52,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:52,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42335] to rsgroup default 2023-07-23 21:10:52,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:52,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-23 21:10:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42335,1690146647320] are moved back to appInfo 2023-07-23 21:10:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-23 21:10:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:52,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-23 21:10:52,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:52,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:52,788 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:52,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:52,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:52,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:52,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 
21:10:52,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:52,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:52,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:52,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:52,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 367 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147852799, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:52,800 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:52,802 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:52,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:52,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:52,803 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:52,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:52,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:52,822 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=509 (was 507) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1206779787_17 at /127.0.0.1:44502 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=795 (was 795), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 479), ProcessCount=173 (was 173), AvailableMemoryMB=8028 (was 8094) 2023-07-23 21:10:52,822 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-23 21:10:52,837 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=509, OpenFileDescriptor=795, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=8027 2023-07-23 21:10:52,837 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-23 21:10:52,837 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-23 21:10:52,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:52,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:52,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:52,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
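The "Waiting for cleanup to finish" and "Waiting up to [60,000] milli-secs" entries come from the test polling until the rsgroup state settles between test methods. A hedged sketch of that kind of poll, assuming `org.apache.hadoop.hbase.Waiter` and a hypothetical `rsGroupAdmin` handle; the exact predicate used by TestRSGroupsBase is not shown in this log, so the condition below is an assumption for illustration.

```java
// Sketch of a 60s polling wait like the one logged above; the predicate
// (only 'default' and 'master' groups remain) is assumed, not taken from the test.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class CleanupWaitSketch {
  static void waitForCleanup(final RSGroupAdminClient rsGroupAdmin) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Waiter.waitFor(conf, 60000, new Waiter.Predicate<Exception>() {
      @Override
      public boolean evaluate() throws Exception {
        return rsGroupAdmin.listRSGroups().size() == 2;  // default + master
      }
    });
  }
}
```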
2023-07-23 21:10:52,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:52,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:52,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:52,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:52,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:52,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:52,853 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:52,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:52,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:52,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:52,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:52,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:52,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:52,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:52,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:52,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 395 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147852867, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:52,868 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:52,870 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:52,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:52,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:52,871 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:52,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:52,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:52,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:52,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:52,877 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:52,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): 
Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 76 2023-07-23 21:10:52,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-23 21:10:52,879 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:52,879 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:52,880 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:52,882 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:52,884 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:52,884 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a empty. 2023-07-23 21:10:52,885 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:52,885 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-23 21:10:52,905 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:52,906 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5668ecfc8fd3bdcd338be285a90d341a, NAME => 'Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:52,919 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:52,919 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5668ecfc8fd3bdcd338be285a90d341a, disabling compactions & flushes 2023-07-23 21:10:52,919 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 
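The CreateTableProcedure above (pid=76) was triggered by a client createTable call whose descriptor is printed in full at 21:10:52,874. Below is a minimal sketch of the equivalent Admin API call, mirroring only the attributes shown in the log (REGION_REPLICATION => '1', family 'cf' with VERSIONS => '1', BLOCKSIZE => '65536'); the connection handling is illustrative and all other attributes fall back to defaults.

```java
// Client-side equivalent of the logged create request for Group_testCreateAndDrop.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTestTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testCreateAndDrop"))
          .setRegionReplication(1)                                   // REGION_REPLICATION => '1'
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
              .setMaxVersions(1)                                     // VERSIONS => '1'
              .setBlocksize(65536)                                   // BLOCKSIZE => '65536'
              .build())
          .build());
    }
  }
}
```

`Admin.createTable` blocks until the procedure completes, which is why the client keeps polling "Checking to see if procedure is done pid=76" in the entries that follow.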
2023-07-23 21:10:52,919 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:52,919 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. after waiting 0 ms 2023-07-23 21:10:52,919 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:52,919 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:52,919 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5668ecfc8fd3bdcd338be285a90d341a: 2023-07-23 21:10:52,922 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:52,923 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146652923"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146652923"}]},"ts":"1690146652923"} 2023-07-23 21:10:52,925 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:52,926 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:52,926 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146652926"}]},"ts":"1690146652926"} 2023-07-23 21:10:52,927 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-23 21:10:52,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:52,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:52,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:52,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:52,932 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:10:52,932 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:52,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, ASSIGN}] 2023-07-23 21:10:52,933 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=77, ppid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, ASSIGN 2023-07-23 21:10:52,934 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=77, ppid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42335,1690146647320; forceNewPlan=false, retain=false 2023-07-23 21:10:52,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-23 21:10:53,084 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:10:53,086 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=77 updating hbase:meta row=5668ecfc8fd3bdcd338be285a90d341a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:53,086 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146653086"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146653086"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146653086"}]},"ts":"1690146653086"} 2023-07-23 21:10:53,088 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=77, state=RUNNABLE; OpenRegionProcedure 5668ecfc8fd3bdcd338be285a90d341a, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:53,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-23 21:10:53,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 
2023-07-23 21:10:53,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5668ecfc8fd3bdcd338be285a90d341a, NAME => 'Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:53,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:53,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,256 INFO [StoreOpener-5668ecfc8fd3bdcd338be285a90d341a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,258 DEBUG [StoreOpener-5668ecfc8fd3bdcd338be285a90d341a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/cf 2023-07-23 21:10:53,258 DEBUG [StoreOpener-5668ecfc8fd3bdcd338be285a90d341a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/cf 2023-07-23 21:10:53,259 INFO [StoreOpener-5668ecfc8fd3bdcd338be285a90d341a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5668ecfc8fd3bdcd338be285a90d341a columnFamilyName cf 2023-07-23 21:10:53,259 INFO [StoreOpener-5668ecfc8fd3bdcd338be285a90d341a-1] regionserver.HStore(310): Store=5668ecfc8fd3bdcd338be285a90d341a/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:53,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:53,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:53,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5668ecfc8fd3bdcd338be285a90d341a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10759617760, jitterRate=0.0020674914121627808}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:53,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5668ecfc8fd3bdcd338be285a90d341a: 2023-07-23 21:10:53,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a., pid=78, masterSystemTime=1690146653240 2023-07-23 21:10:53,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:53,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 
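At this point region 5668ecfc8fd3bdcd338be285a90d341a has been opened on jenkins-hbase4.apache.org,42335. As a small illustrative sketch, a client could confirm the assignment with the standard 2.x `RegionLocator` API; the helper method and its caller are hypothetical.

```java
// Prints the encoded region name and hosting server for each region of a table,
// e.g. "5668ecfc8fd3bdcd338be285a90d341a -> jenkins-hbase4.apache.org,42335,...".
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  static void printLocations(Connection conn, String table) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```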
2023-07-23 21:10:53,469 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=77 updating hbase:meta row=5668ecfc8fd3bdcd338be285a90d341a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:53,469 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146653278"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146653278"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146653278"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146653278"}]},"ts":"1690146653278"} 2023-07-23 21:10:53,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=77 2023-07-23 21:10:53,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=77, state=SUCCESS; OpenRegionProcedure 5668ecfc8fd3bdcd338be285a90d341a, server=jenkins-hbase4.apache.org,42335,1690146647320 in 387 msec 2023-07-23 21:10:53,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-23 21:10:53,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, ASSIGN in 545 msec 2023-07-23 21:10:53,480 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:53,481 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146653481"}]},"ts":"1690146653481"} 2023-07-23 21:10:53,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-23 21:10:53,482 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-23 21:10:53,485 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:53,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 611 msec 2023-07-23 21:10:53,532 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCreateAndDrop' 2023-07-23 21:10:53,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-23 21:10:53,983 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 76 completed 2023-07-23 21:10:53,983 DEBUG [Listener at localhost/38995] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. 
Timeout = 60000ms 2023-07-23 21:10:53,983 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:53,989 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 2023-07-23 21:10:53,990 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:53,990 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-23 21:10:53,990 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:53,995 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-23 21:10:53,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndDrop 2023-07-23 21:10:53,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,001 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146654001"}]},"ts":"1690146654001"} 2023-07-23 21:10:54,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-23 21:10:54,003 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-23 21:10:54,004 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-23 21:10:54,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, UNASSIGN}] 2023-07-23 21:10:54,007 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, UNASSIGN 2023-07-23 21:10:54,008 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=80 updating hbase:meta row=5668ecfc8fd3bdcd338be285a90d341a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:54,008 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146654008"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146654008"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146654008"}]},"ts":"1690146654008"} 2023-07-23 21:10:54,010 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=80, state=RUNNABLE; CloseRegionProcedure 5668ecfc8fd3bdcd338be285a90d341a, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:54,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure 
is done pid=79 2023-07-23 21:10:54,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:54,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5668ecfc8fd3bdcd338be285a90d341a, disabling compactions & flushes 2023-07-23 21:10:54,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:54,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:54,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. after waiting 0 ms 2023-07-23 21:10:54,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:54,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:54,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a. 2023-07-23 21:10:54,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5668ecfc8fd3bdcd338be285a90d341a: 2023-07-23 21:10:54,174 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:54,175 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=80 updating hbase:meta row=5668ecfc8fd3bdcd338be285a90d341a, regionState=CLOSED 2023-07-23 21:10:54,175 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146654175"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146654175"}]},"ts":"1690146654175"} 2023-07-23 21:10:54,178 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=80 2023-07-23 21:10:54,178 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=80, state=SUCCESS; CloseRegionProcedure 5668ecfc8fd3bdcd338be285a90d341a, server=jenkins-hbase4.apache.org,42335,1690146647320 in 167 msec 2023-07-23 21:10:54,184 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-23 21:10:54,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=5668ecfc8fd3bdcd338be285a90d341a, UNASSIGN in 172 msec 2023-07-23 21:10:54,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146654186"}]},"ts":"1690146654186"} 
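The entries above show the master running DisableTableProcedure pid=79 for Group_testCreateAndDrop and unassigning/closing its single region. Below is a minimal client-side sketch of the call that kicks this flow off, assuming an hbase-site.xml pointing at the cluster is on the classpath; apart from the table name taken from the log, the code is illustrative and not taken from the test itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testCreateAndDrop");
      // Submits a DisableTableProcedure on the master (pid=79 above) and blocks until
      // every region of the table has been unassigned and marked CLOSED in hbase:meta.
      admin.disableTable(table);
    }
  }
}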
2023-07-23 21:10:54,187 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-23 21:10:54,189 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCreateAndDrop to state=DISABLED 2023-07-23 21:10:54,192 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 194 msec 2023-07-23 21:10:54,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-23 21:10:54,305 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 79 completed 2023-07-23 21:10:54,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndDrop 2023-07-23 21:10:54,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,309 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=82, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-23 21:10:54,309 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=82, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:54,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:54,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:54,316 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:54,319 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/cf, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/recovered.edits] 2023-07-23 21:10:54,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-23 21:10:54,327 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/recovered.edits/4.seqid to 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a/recovered.edits/4.seqid 2023-07-23 21:10:54,327 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCreateAndDrop/5668ecfc8fd3bdcd338be285a90d341a 2023-07-23 21:10:54,328 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-23 21:10:54,330 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=82, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,333 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-23 21:10:54,334 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-23 21:10:54,336 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=82, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,336 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 2023-07-23 21:10:54,336 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146654336"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:54,337 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:54,338 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5668ecfc8fd3bdcd338be285a90d341a, NAME => 'Group_testCreateAndDrop,,1690146652874.5668ecfc8fd3bdcd338be285a90d341a.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:54,338 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 
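The DeleteTableProcedure entries above (pid=82) archive the region directory via HFileArchiver and remove the region and table-state rows from hbase:meta. A rough sketch of the corresponding client call follows, again assuming a reachable cluster configuration; the table is already disabled at this point in the log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testCreateAndDrop");
      // The table must already be disabled, as it is at this point in the log.
      // deleteTable submits a DeleteTableProcedure (pid=82 above): region directories
      // are moved under archive/ by HFileArchiver, the region and table-state rows are
      // removed from hbase:meta, and the table descriptor is dropped.
      admin.deleteTable(table);
    }
  }
}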
2023-07-23 21:10:54,338 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146654338"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:54,339 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-23 21:10:54,341 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=82, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:10:54,343 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 36 msec 2023-07-23 21:10:54,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-23 21:10:54,422 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 82 completed 2023-07-23 21:10:54,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:54,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:54,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:54,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
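The ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup / AddRSGroup requests logged here and in the entries that follow are the test's per-method rsgroup teardown. The sketch below shows roughly equivalent calls through the RSGroupAdminClient named in the stack traces further down; the constructor and method signatures are assumed from the branch-2.4 hbase-rsgroup module, and the group and server names are taken from the log for illustration only.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // ListRSGroupInfos: dump the groups the master currently knows about.
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName() + " -> " + info.getServers());
      }

      // MoveTables / MoveServers with empty sets; the server side ignores these,
      // which is the "passed an empty set. Ignoring." message above.
      rsGroupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);

      // RemoveRSGroup / AddRSGroup: drop and re-create the test's "master" group.
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");

      // MoveServers with the master's own address, which is not a live region server;
      // this is what produces the ConstraintException seen further down in the log.
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:35573")),
            "master");
      } catch (ConstraintException e) {
        System.out.println("move rejected: " + e.getMessage()); // the test only logs this as FYI
      }
    }
  }
}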
2023-07-23 21:10:54,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:54,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:54,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:54,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:54,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:54,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:54,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:54,438 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:54,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:54,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:54,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:54,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:54,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:54,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:54,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:54,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:54,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:54,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 456 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147854451, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:54,452 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:54,456 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:54,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:54,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:54,457 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:54,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:54,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:54,475 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=507 (was 509), OpenFileDescriptor=792 (was 795), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=480 (was 479) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=7940 (was 8027) 2023-07-23 21:10:54,475 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-23 21:10:54,490 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=507, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=480, ProcessCount=173, AvailableMemoryMB=7939 2023-07-23 21:10:54,490 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-23 21:10:54,491 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testCloneSnapshot 2023-07-23 21:10:54,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:54,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:54,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:54,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:54,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:54,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:54,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:54,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:54,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:54,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:54,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:54,521 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:54,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:54,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:54,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-23 21:10:54,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:54,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:54,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:54,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:54,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:54,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:54,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 484 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147854534, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:54,535 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:54,537 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:54,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:54,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:54,539 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:54,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:54,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:54,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:54,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=83, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:54,545 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:54,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 83 2023-07-23 21:10:54,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-23 21:10:54,550 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:54,550 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:54,551 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:54,553 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:54,555 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,556 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 empty. 2023-07-23 21:10:54,556 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,556 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-23 21:10:54,572 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:54,574 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 42ba9449174a797921c5780d2ae25c44, NAME => 'Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:54,595 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:54,595 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing 42ba9449174a797921c5780d2ae25c44, disabling compactions & flushes 2023-07-23 21:10:54,596 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:54,596 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:54,596 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. after waiting 0 ms 2023-07-23 21:10:54,596 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:54,596 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 
2023-07-23 21:10:54,596 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 42ba9449174a797921c5780d2ae25c44: 2023-07-23 21:10:54,598 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:54,599 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146654599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146654599"}]},"ts":"1690146654599"} 2023-07-23 21:10:54,601 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:54,602 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:54,602 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146654602"}]},"ts":"1690146654602"} 2023-07-23 21:10:54,603 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-23 21:10:54,608 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:54,608 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:54,608 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:54,608 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:54,608 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:10:54,608 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:54,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, ASSIGN}] 2023-07-23 21:10:54,610 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, ASSIGN 2023-07-23 21:10:54,611 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1690146641774; forceNewPlan=false, retain=false 2023-07-23 21:10:54,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-23 21:10:54,761 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
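The CreateTableProcedure entries above (pid=83) write the filesystem layout for Group_testCloneSnapshot and initialize its region with a single column family 'test' and REGION_REPLICATION=1. Below is a hedged sketch of a client-side descriptor that would match the one printed by the master; any attribute not shown in the log is left at its default, and the code is illustrative rather than the test's actual source.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateCloneSourceTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Mirrors the descriptor the master printed: REGION_REPLICATION => '1' and a
      // single family 'test' with VERSIONS => '1'; every other attribute is a default.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testCloneSnapshot"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("test"))
              .setMaxVersions(1)
              .build())
          .build();
      // createTable submits a CreateTableProcedure (pid=83 above): the FS layout is
      // written under .tmp, the region is added to hbase:meta, and an ASSIGN
      // subprocedure opens it on one of the region servers.
      admin.createTable(desc);
    }
  }
}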
2023-07-23 21:10:54,762 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=42ba9449174a797921c5780d2ae25c44, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:54,763 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146654762"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146654762"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146654762"}]},"ts":"1690146654762"} 2023-07-23 21:10:54,765 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; OpenRegionProcedure 42ba9449174a797921c5780d2ae25c44, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:54,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-23 21:10:54,921 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:54,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 42ba9449174a797921c5780d2ae25c44, NAME => 'Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:54,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:54,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,924 INFO [StoreOpener-42ba9449174a797921c5780d2ae25c44-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,926 DEBUG [StoreOpener-42ba9449174a797921c5780d2ae25c44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/test 2023-07-23 21:10:54,926 DEBUG [StoreOpener-42ba9449174a797921c5780d2ae25c44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/test 2023-07-23 21:10:54,927 INFO [StoreOpener-42ba9449174a797921c5780d2ae25c44-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 42ba9449174a797921c5780d2ae25c44 columnFamilyName test 2023-07-23 21:10:54,927 INFO [StoreOpener-42ba9449174a797921c5780d2ae25c44-1] regionserver.HStore(310): Store=42ba9449174a797921c5780d2ae25c44/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:54,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:54,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:54,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 42ba9449174a797921c5780d2ae25c44; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9523791840, jitterRate=-0.11302776634693146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:54,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 42ba9449174a797921c5780d2ae25c44: 2023-07-23 21:10:54,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44., pid=85, masterSystemTime=1690146654916 2023-07-23 21:10:54,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:54,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 
2023-07-23 21:10:54,939 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=42ba9449174a797921c5780d2ae25c44, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:54,939 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146654938"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146654938"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146654938"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146654938"}]},"ts":"1690146654938"} 2023-07-23 21:10:54,942 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-23 21:10:54,942 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; OpenRegionProcedure 42ba9449174a797921c5780d2ae25c44, server=jenkins-hbase4.apache.org,42727,1690146641774 in 175 msec 2023-07-23 21:10:54,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=83 2023-07-23 21:10:54,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=83, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, ASSIGN in 334 msec 2023-07-23 21:10:54,945 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:54,945 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146654945"}]},"ts":"1690146654945"} 2023-07-23 21:10:54,946 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-23 21:10:54,948 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:54,950 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 406 msec 2023-07-23 21:10:55,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-23 21:10:55,152 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 83 completed 2023-07-23 21:10:55,152 DEBUG [Listener at localhost/38995] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-23 21:10:55,152 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:55,157 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
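The entries above show the CreateTableProcedure for Group_testCloneSnapshot finishing (pid=83) and the test client then waiting until the table's region is assigned. A minimal sketch of the equivalent test-side calls, assuming an HBaseTestingUtility instance already backing this mini-cluster; the wrapper class and method names are hypothetical, not the actual test code:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateTableAndWaitSketch {
      // Create the table with the single column family "test" logged above, then
      // block until all of its regions are reported assigned, which is the wait
      // the HBaseTestingUtility(3430)/Waiter(180) entries correspond to.
      public static void createAndWait(HBaseTestingUtility testUtil) throws Exception {
        TableName tn = TableName.valueOf("Group_testCloneSnapshot");
        testUtil.createTable(tn, Bytes.toBytes("test"));
        testUtil.waitUntilAllRegionsAssigned(tn);
      }
    }
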
2023-07-23 21:10:55,157 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:55,157 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-23 21:10:55,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1583): Client=jenkins//172.31.14.131 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-23 21:10:55,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1690146655169 (current time:1690146655169). 2023-07-23 21:10:55,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-23 21:10:55,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] zookeeper.ReadOnlyZKClient(139): Connect 0x44d94119 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:55,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bc1d335, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:55,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:55,182 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:55,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x44d94119 to 127.0.0.1:59847 2023-07-23 21:10:55,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:55,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
2023-07-23 21:10:55,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-23 21:10:55,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=86, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-23 21:10:55,212 DEBUG [PEWorker-5] locking.LockProcedure(309): LOCKED pid=86, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-23 21:10:55,213 INFO [PEWorker-5] procedure2.TimeoutExecutorThread(81): ADDED pid=86, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1690147255213 2023-07-23 21:10:55,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-23 21:10:55,214 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-23 21:10:55,216 DEBUG [PEWorker-3] locking.LockProcedure(242): UNLOCKED pid=86, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-23 21:10:55,218 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-23 21:10:55,219 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 14 msec 2023-07-23 21:10:55,220 DEBUG [PEWorker-3] locking.LockProcedure(309): LOCKED pid=87, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-23 21:10:55,220 INFO [PEWorker-3] procedure2.TimeoutExecutorThread(81): ADDED pid=87, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1690147255220 2023-07-23 21:10:55,236 DEBUG [Listener at localhost/38995] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-23 21:10:55,236 DEBUG [Listener at localhost/38995] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 
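The snapshot request logged at 21:10:55,169 ({ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }) and the HBaseAdmin wait/sleep entries that follow are what a blocking Admin.snapshot(...) call produces; the two-argument form defaults to a FLUSH-type snapshot. A sketch of that call, assuming an Admin handle obtained from the same cluster connection; the wrapper class name is hypothetical:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class TakeFlushSnapshotSketch {
      // Request the FLUSH snapshot and block until the master reports it done,
      // which is why the client above keeps sleeping and re-checking status.
      public static void takeSnapshot(Admin admin) throws Exception {
        admin.snapshot("Group_testCloneSnapshot_snap",
            TableName.valueOf("Group_testCloneSnapshot"));
      }
    }
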
2023-07-23 21:10:55,265 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-23 21:10:55,265 INFO [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-23 21:10:55,266 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-23 21:10:55,266 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-23 21:10:55,267 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 2023-07-23 21:10:55,267 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,267 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,269 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,269 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,269 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire 
node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,269 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,269 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,269 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,270 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,270 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,270 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-23 21:10:55,270 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,270 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,270 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,270 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-23 21:10:55,272 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,272 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-23 21:10:55,272 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,272 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-23 21:10:55,272 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-23 21:10:55,271 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-23 21:10:55,272 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,272 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-23 21:10:55,278 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-23 21:10:55,278 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-23 21:10:55,278 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-23 21:10:55,278 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-23 21:10:55,279 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-23 21:10:55,279 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-23 21:10:55,279 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-23 21:10:55,279 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-23 21:10:55,283 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-23 21:10:55,283 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-23 21:10:55,283 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] 
procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-23 21:10:55,283 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-23 21:10:55,283 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-23 21:10:55,284 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,46485,1690146642211' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-23 21:10:55,284 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-23 21:10:55,284 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-23 21:10:55,284 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-23 21:10:55,285 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-23 21:10:55,285 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,45637,1690146645550' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-23 21:10:55,283 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-23 21:10:55,285 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42335,1690146647320' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-23 21:10:55,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-23 21:10:55,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-23 21:10:55,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42727,1690146641774' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-23 21:10:55,289 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,290 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,295 DEBUG [member: 
'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,295 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,295 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,295 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-23 21:10:55,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-23 21:10:55,295 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,295 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,295 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-23 21:10:55,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-23 21:10:55,296 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,296 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-23 21:10:55,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-23 21:10:55,297 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-23 21:10:55,297 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,297 DEBUG [member: 
'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-23 21:10:55,297 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-23 21:10:55,297 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,298 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,298 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,298 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,299 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-23 21:10:55,299 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,42727,1690146641774' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-23 21:10:55,300 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6b1243ab[Count = 0] remaining members to acquire global barrier 2023-07-23 21:10:55,300 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 2023-07-23 21:10:55,300 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 
21:10:55,306 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-23 21:10:55,307 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,307 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,306 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-23 21:10:55,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
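The acquire/reached exchange above is a ZooKeeper two-phase barrier: each region server sets a watch on the not-yet-existing /hbase/online-snapshot/reached/<procedure> znode and proceeds once the coordinator creates it. A rough illustration of that member-side wait using the plain ZooKeeper client (not the actual ZKProcedureMemberRpcs code); the znode path and names are taken from the log, error handling omitted:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public final class ReachedBarrierWaitSketch {
      // Block until the coordinator creates the "reached" znode for the procedure.
      public static void awaitReached(ZooKeeper zk, String reachedZnode) throws Exception {
        CountDownLatch reached = new CountDownLatch(1);
        Watcher watcher = (WatchedEvent event) -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated
              && reachedZnode.equals(event.getPath())) {
            reached.countDown();
          }
        };
        // exists() both checks the current state and leaves a one-shot watch behind,
        // matching the "Set watcher on znode that does not yet exist" entries above.
        if (zk.exists(reachedZnode, watcher) != null) {
          return; // barrier already reached
        }
        reached.await();
      }
    }
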
2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,46485,1690146642211' in zk 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,45637,1690146645550' in zk 2023-07-23 21:10:55,307 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,307 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,42335,1690146647320' in zk 2023-07-23 21:10:55,308 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-23 21:10:55,308 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-23 21:10:55,308 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-23 21:10:55,308 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:55,309 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. started... 2023-07-23 21:10:55,310 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for 42ba9449174a797921c5780d2ae25c44: 2023-07-23 21:10:55,310 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-23 21:10:55,310 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-23 21:10:55,310 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-23 21:10:55,310 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-23 21:10:55,311 DEBUG [member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-23 21:10:55,312 DEBUG [member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-23 21:10:55,313 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-23 21:10:55,313 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-23 21:10:55,313 DEBUG [member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-23 21:10:55,314 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.' region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-23 21:10:55,320 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-23 21:10:55,324 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-23 21:10:55,336 DEBUG [Listener at localhost/38995] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-23 21:10:55,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-23 21:10:55,342 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. completed. 2023-07-23 21:10:55,342 DEBUG [rs(jenkins-hbase4.apache.org,42727,1690146641774)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:55,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-23 21:10:55,342 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-23 21:10:55,342 DEBUG [Listener at localhost/38995] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 
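The "Getting current status of snapshot from master" / "Checking to see if snapshot ... is done" round trips above are the client's completion poll; the same check is exposed directly through Admin.isSnapshotFinished. A sketch under the assumption that the poll is driven by hand rather than inside the blocking snapshot call; the 200 ms interval mirrors the sleep logged above but is otherwise arbitrary:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.SnapshotDescription;

    public final class SnapshotPollSketch {
      // Poll the master until the snapshot is reported finished.
      public static void waitForSnapshot(Admin admin) throws Exception {
        SnapshotDescription snap = new SnapshotDescription(
            "Group_testCloneSnapshot_snap", TableName.valueOf("Group_testCloneSnapshot"));
        while (!admin.isSnapshotFinished(snap)) {
          Thread.sleep(200);
        }
      }
    }
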
2023-07-23 21:10:55,342 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 2023-07-23 21:10:55,343 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,343 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-23 21:10:55,343 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,42727,1690146641774' in zk 2023-07-23 21:10:55,345 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-23 21:10:55,345 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,345 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-23 21:10:55,346 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,346 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 
2023-07-23 21:10:55,346 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-23 21:10:55,346 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-23 21:10:55,347 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-23 21:10:55,347 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-23 21:10:55,348 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-23 21:10:55,348 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,349 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,350 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,351 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,351 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-23 21:10:55,351 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-23 21:10:55,352 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,352 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,353 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,353 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,354 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase4.apache.org,42727,1690146641774': 2023-07-23 21:10:55,354 INFO [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-23 21:10:55,354 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-07-23 21:10:55,354 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,42727,1690146641774' released barrier for procedure'Group_testCloneSnapshot_snap', counting down latch. 
Waiting for 0 more 2023-07-23 21:10:55,354 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-23 21:10:55,354 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-23 21:10:55,354 INFO [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snapincluding nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,357 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,357 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,357 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,358 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,358 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,358 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,358 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,358 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,358 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,359 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,359 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,360 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,360 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,360 DEBUG 
[(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,360 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-23 21:10:55,360 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,360 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,361 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,361 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-23 21:10:55,361 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,361 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-23 21:10:55,362 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,362 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-23 21:10:55,362 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,363 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,363 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,364 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,364 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-23 21:10:55,364 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,365 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,365 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-23 21:10:55,365 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,366 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,366 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,366 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,367 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,367 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,370 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,370 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,370 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,370 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,370 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,371 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,371 DEBUG [(jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-23 21:10:55,371 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,370 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,370 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,370 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,372 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,372 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,372 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,372 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,372 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-23 21:10:55,371 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-23 21:10:55,373 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-23 21:10:55,373 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-23 21:10:55,373 INFO 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,373 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,372 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,373 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:55,373 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,374 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,375 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-23 21:10:55,375 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,375 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,435 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,489 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 2023-07-23 21:10:55,489 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,489 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-23 21:10:55,489 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1690146655214Consolidate snapshot: Group_testCloneSnapshot_snap at 1690146655373 (+159 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1690146655373Writing data manifest for Group_testCloneSnapshot_snap at 1690146655386 (+13 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1690146655423 (+37 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1690146655489 (+66 ms) 2023-07-23 21:10:55,491 DEBUG [PEWorker-2] locking.LockProcedure(242): UNLOCKED pid=87, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-23 21:10:55,492 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 275 msec 2023-07-23 21:10:55,543 DEBUG [Listener at localhost/38995] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-23 21:10:55,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-23 21:10:55,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 2023-07-23 21:10:55,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-23 21:10:55,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:55,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:55,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:55,561 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 2023-07-23 21:10:55,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690146655169 type: FLUSH version: 2 ttl: 0 ) 2023-07-23 21:10:55,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-23 21:10:55,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:55,619 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:55,627 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690146655169 type: FLUSH version: 2 ttl: 0 2023-07-23 21:10:55,628 DEBUG [PEWorker-4] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-23 21:10:55,629 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(239): region to add: 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:55,629 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(585): clone region=42ba9449174a797921c5780d2ae25c44 as c34f5b490309c5d34e478fc247221ea6 in snapshot Group_testCloneSnapshot_snap 2023-07-23 21:10:55,630 INFO 
[RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => c34f5b490309c5d34e478fc247221ea6, NAME => 'Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:55,651 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,651 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing c34f5b490309c5d34e478fc247221ea6, disabling compactions & flushes 2023-07-23 21:10:55,651 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:55,651 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:55,651 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. after waiting 0 ms 2023-07-23 21:10:55,651 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:55,651 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:55,651 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for c34f5b490309c5d34e478fc247221ea6: 2023-07-23 21:10:55,651 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690146655169 type: FLUSH version: 2 ttl: 0 2023-07-23 21:10:55,652 INFO [PEWorker-4] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-23 21:10:55,653 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:10:55,662 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690146655662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146655662"}]},"ts":"1690146655662"} 2023-07-23 21:10:55,664 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 21:10:55,665 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146655665"}]},"ts":"1690146655665"} 2023-07-23 21:10:55,666 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-23 21:10:55,670 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:55,670 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:55,670 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:55,670 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:55,670 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:10:55,670 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:55,670 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, ASSIGN}] 2023-07-23 21:10:55,672 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, ASSIGN 2023-07-23 21:10:55,673 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1690146641774; forceNewPlan=false, retain=false 2023-07-23 21:10:55,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:55,824 INFO [jenkins-hbase4:35573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 21:10:55,825 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=c34f5b490309c5d34e478fc247221ea6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:55,825 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690146655825"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146655825"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146655825"}]},"ts":"1690146655825"} 2023-07-23 21:10:55,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure c34f5b490309c5d34e478fc247221ea6, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:55,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:55,986 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:55,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c34f5b490309c5d34e478fc247221ea6, NAME => 'Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:55,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,988 INFO [StoreOpener-c34f5b490309c5d34e478fc247221ea6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,989 DEBUG [StoreOpener-c34f5b490309c5d34e478fc247221ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/test 2023-07-23 21:10:55,989 DEBUG [StoreOpener-c34f5b490309c5d34e478fc247221ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/test 2023-07-23 21:10:55,990 INFO [StoreOpener-c34f5b490309c5d34e478fc247221ea6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 
MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c34f5b490309c5d34e478fc247221ea6 columnFamilyName test 2023-07-23 21:10:55,990 INFO [StoreOpener-c34f5b490309c5d34e478fc247221ea6-1] regionserver.HStore(310): Store=c34f5b490309c5d34e478fc247221ea6/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:55,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:55,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c34f5b490309c5d34e478fc247221ea6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9674922560, jitterRate=-0.09895262122154236}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:55,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c34f5b490309c5d34e478fc247221ea6: 2023-07-23 21:10:55,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6., pid=90, masterSystemTime=1690146655982 2023-07-23 21:10:56,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:56,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 
2023-07-23 21:10:56,002 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=c34f5b490309c5d34e478fc247221ea6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:56,002 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690146656001"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146656001"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146656001"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146656001"}]},"ts":"1690146656001"} 2023-07-23 21:10:56,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-23 21:10:56,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure c34f5b490309c5d34e478fc247221ea6, server=jenkins-hbase4.apache.org,42727,1690146641774 in 176 msec 2023-07-23 21:10:56,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-23 21:10:56,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, ASSIGN in 336 msec 2023-07-23 21:10:56,009 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656009"}]},"ts":"1690146656009"} 2023-07-23 21:10:56,010 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-23 21:10:56,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690146655169 type: FLUSH version: 2 ttl: 0 ) in 445 msec 2023-07-23 21:10:56,067 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCloneSnapshot' 2023-07-23 21:10:56,068 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCloneSnapshot_clone' 2023-07-23 21:10:56,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:56,188 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 88 completed 2023-07-23 21:10:56,190 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-23 21:10:56,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot 2023-07-23 21:10:56,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:56,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 21:10:56,196 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656196"}]},"ts":"1690146656196"} 2023-07-23 21:10:56,198 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-23 21:10:56,206 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-23 21:10:56,207 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, UNASSIGN}] 2023-07-23 21:10:56,209 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, UNASSIGN 2023-07-23 21:10:56,210 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=42ba9449174a797921c5780d2ae25c44, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:56,210 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146656210"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146656210"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146656210"}]},"ts":"1690146656210"} 2023-07-23 21:10:56,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; CloseRegionProcedure 42ba9449174a797921c5780d2ae25c44, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:56,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 21:10:56,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:56,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 42ba9449174a797921c5780d2ae25c44, disabling compactions & flushes 2023-07-23 21:10:56,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:56,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:56,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. after waiting 0 ms 2023-07-23 21:10:56,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 
2023-07-23 21:10:56,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-23 21:10:56,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44. 2023-07-23 21:10:56,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 42ba9449174a797921c5780d2ae25c44: 2023-07-23 21:10:56,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:56,376 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=42ba9449174a797921c5780d2ae25c44, regionState=CLOSED 2023-07-23 21:10:56,376 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690146656376"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146656376"}]},"ts":"1690146656376"} 2023-07-23 21:10:56,379 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-23 21:10:56,379 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; CloseRegionProcedure 42ba9449174a797921c5780d2ae25c44, server=jenkins-hbase4.apache.org,42727,1690146641774 in 166 msec 2023-07-23 21:10:56,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-23 21:10:56,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=42ba9449174a797921c5780d2ae25c44, UNASSIGN in 172 msec 2023-07-23 21:10:56,381 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656381"}]},"ts":"1690146656381"} 2023-07-23 21:10:56,382 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-23 21:10:56,384 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-23 21:10:56,385 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 193 msec 2023-07-23 21:10:56,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 21:10:56,498 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 91 completed 2023-07-23 21:10:56,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot 2023-07-23 21:10:56,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 
21:10:56,502 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=94, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:56,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-23 21:10:56,503 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=94, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:56,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:56,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:56,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:56,507 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:56,509 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/recovered.edits, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/test] 2023-07-23 21:10:56,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-23 21:10:56,514 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/recovered.edits/5.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44/recovered.edits/5.seqid 2023-07-23 21:10:56,516 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot/42ba9449174a797921c5780d2ae25c44 2023-07-23 21:10:56,516 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-23 21:10:56,518 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=94, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:56,520 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-23 21:10:56,522 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 
2023-07-23 21:10:56,523 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=94, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:56,523 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-23 21:10:56,523 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146656523"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:56,524 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:56,524 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 42ba9449174a797921c5780d2ae25c44, NAME => 'Group_testCloneSnapshot,,1690146654542.42ba9449174a797921c5780d2ae25c44.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:56,524 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-23 21:10:56,525 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146656524"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:56,526 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-23 21:10:56,528 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=94, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:10:56,529 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 29 msec 2023-07-23 21:10:56,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-23 21:10:56,613 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 94 completed 2023-07-23 21:10:56,614 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-23 21:10:56,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot_clone 2023-07-23 21:10:56,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-23 21:10:56,618 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656618"}]},"ts":"1690146656618"} 2023-07-23 21:10:56,619 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-23 21:10:56,621 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-23 21:10:56,622 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, UNASSIGN}] 2023-07-23 21:10:56,623 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, UNASSIGN 2023-07-23 21:10:56,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=96 updating hbase:meta row=c34f5b490309c5d34e478fc247221ea6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:56,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690146656624"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146656624"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146656624"}]},"ts":"1690146656624"} 2023-07-23 21:10:56,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=96, state=RUNNABLE; CloseRegionProcedure c34f5b490309c5d34e478fc247221ea6, server=jenkins-hbase4.apache.org,42727,1690146641774}] 2023-07-23 21:10:56,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-23 21:10:56,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:56,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c34f5b490309c5d34e478fc247221ea6, disabling compactions & flushes 2023-07-23 21:10:56,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:56,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:56,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. after waiting 0 ms 2023-07-23 21:10:56,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 2023-07-23 21:10:56,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:56,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6. 
2023-07-23 21:10:56,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c34f5b490309c5d34e478fc247221ea6: 2023-07-23 21:10:56,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:56,785 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=96 updating hbase:meta row=c34f5b490309c5d34e478fc247221ea6, regionState=CLOSED 2023-07-23 21:10:56,786 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690146656785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146656785"}]},"ts":"1690146656785"} 2023-07-23 21:10:56,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=96 2023-07-23 21:10:56,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=96, state=SUCCESS; CloseRegionProcedure c34f5b490309c5d34e478fc247221ea6, server=jenkins-hbase4.apache.org,42727,1690146641774 in 162 msec 2023-07-23 21:10:56,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-23 21:10:56,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=c34f5b490309c5d34e478fc247221ea6, UNASSIGN in 167 msec 2023-07-23 21:10:56,791 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656791"}]},"ts":"1690146656791"} 2023-07-23 21:10:56,792 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-23 21:10:56,794 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-23 21:10:56,797 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 182 msec 2023-07-23 21:10:56,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-23 21:10:56,920 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 95 completed 2023-07-23 21:10:56,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot_clone 2023-07-23 21:10:56,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,923 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=98, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-23 21:10:56,924 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=98, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:56,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:56,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:56,928 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:56,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-23 21:10:56,930 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/recovered.edits, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/test] 2023-07-23 21:10:56,934 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6/recovered.edits/4.seqid 2023-07-23 21:10:56,936 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/default/Group_testCloneSnapshot_clone/c34f5b490309c5d34e478fc247221ea6 2023-07-23 21:10:56,936 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-23 21:10:56,939 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=98, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,940 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-23 21:10:56,942 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-23 21:10:56,943 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=98, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,943 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
2023-07-23 21:10:56,943 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146656943"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:56,945 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:56,945 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c34f5b490309c5d34e478fc247221ea6, NAME => 'Group_testCloneSnapshot_clone,,1690146654542.c34f5b490309c5d34e478fc247221ea6.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:56,945 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-23 21:10:56,945 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146656945"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:56,949 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-23 21:10:56,952 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=98, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:10:56,952 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 31 msec 2023-07-23 21:10:57,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-23 21:10:57,031 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 98 completed 2023-07-23 21:10:57,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:57,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:57,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:57,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:57,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:57,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:57,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:57,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:57,045 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:57,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:57,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:57,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:57,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:57,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:57,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:57,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 568 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147857062, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:57,063 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:57,065 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:57,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,067 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:57,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:57,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:57,090 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=513 (was 507) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-838552584_17 at /127.0.0.1:60338 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,45637,1690146645550' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: (jenkins-hbase4.apache.org,35573,1690146639994)-proc-coordinator-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,46485,1690146642211' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-838552584_17 at /127.0.0.1:48846 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,42727,1690146641774' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,42335,1690146647320' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1952713836_17 at /127.0.0.1:50156 [Waiting for 
operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x17f15d9f-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=793 (was 792) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=480 (was 480), ProcessCount=173 (was 173), AvailableMemoryMB=7891 (was 7939) 2023-07-23 21:10:57,090 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-23 21:10:57,108 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=513, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=480, ProcessCount=173, AvailableMemoryMB=7891 2023-07-23 21:10:57,108 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-23 21:10:57,108 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:57,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:57,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
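The entries above are the per-test cleanup that TestRSGroupsBase runs between methods: move no tables and no servers back to the default group, drop and re-add the "master" rsgroup, then attempt to move the master's address (jenkins-hbase4.apache.org:35573) into it, which fails with ConstraintException because the master is not an online region server; the test treats this as expected and only logs "Got this on setup, FYI". A rough sketch of that pattern follows, assuming the hbase-rsgroup RSGroupAdminClient; the helper name restoreDefaultGroups and the masterAddress parameter are hypothetical, and exact client signatures can differ between branches.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RSGroupCleanupSketch {
      // Hypothetical helper mirroring the moveTables/moveServers/remove/add calls in the log.
      static void restoreDefaultGroups(Connection conn, Address masterAddress) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);   // "move tables [] to rsgroup default"
        groups.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);  // "move servers [] to rsgroup default"
        groups.removeRSGroup("master");
        groups.addRSGroup("master");
        try {
          // e.g. Address.fromString("jenkins-hbase4.apache.org:35573")
          groups.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException expected) {
          // The master is not an online region server, so this is the
          // "Server ... is either offline or it does not exist" warning seen above.
        }
      }
    }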
2023-07-23 21:10:57,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:57,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:57,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:57,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:57,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:57,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:57,124 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:57,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:57,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:57,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:57,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:57,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:57,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:57,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 596 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147857134, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:57,135 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:57,136 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:57,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,137 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:57,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:57,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:57,138 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:57,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:57,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:57,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-23 21:10:57,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-23 21:10:57,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:57,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:57,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:57,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42335] to rsgroup appInfo 2023-07-23 21:10:57,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:57,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:57,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:57,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:57,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42335,1690146647320] are moved back to default 2023-07-23 21:10:57,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-23 21:10:57,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:57,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-23 21:10:57,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service 
request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:57,171 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-23 21:10:57,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.ServerManager(636): Server jenkins-hbase4.apache.org,42335,1690146647320 added to draining server list. 2023-07-23 21:10:57,173 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:57,175 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase4.apache.org,42335,1690146647320 is already in the draining server list.Ignoring request to add it again. 2023-07-23 21:10:57,175 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase4.apache.org,42335,1690146647320] 2023-07-23 21:10:57,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-23 21:10:57,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=99, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:57,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-23 21:10:57,186 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:57,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 12 msec 2023-07-23 21:10:57,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-23 21:10:57,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:57,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:57,286 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:57,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: 
"testCreateWhenRsgroupNoOnlineServers" procId is: 100 2023-07-23 21:10:57,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 21:10:57,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=100, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=16 msec 2023-07-23 21:10:57,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 21:10:57,391 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 100 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-23 21:10:57,391 DEBUG [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-23 21:10:57,398 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:57,398 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-23 21:10:57,398 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase4.apache.org,42335,1690146647320] 2023-07-23 21:10:57,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:57,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=101, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:57,404 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:57,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 101 2023-07-23 21:10:57,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:57,406 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,406 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:57,407 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:57,407 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:57,409 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:57,411 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,412 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff empty. 
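The rolled-back pid=100 above is the expected failure path for this test: creating a table in namespace Group_ns, which is pinned to rsgroup appInfo, while appInfo has no online servers. A minimal client-side sketch that corresponds to those log lines (table and family names taken from the log; variable names, the Configuration handle and the assertion are assumptions, not TestRSGroupsBasics' literal source):

// Sketch only. Uses org.apache.hadoop.hbase.TableName and
// org.apache.hadoop.hbase.client.{ConnectionFactory, Connection, Admin,
// TableDescriptor, TableDescriptorBuilder, ColumnFamilyDescriptorBuilder};
// fail() is org.junit.Assert.fail; 'conf' is an assumed mini-cluster Configuration.
TableName tableName = TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers");
TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
    .build();
try (Connection conn = ConnectionFactory.createConnection(conf);
     Admin admin = conn.getAdmin()) {
  admin.createTable(desc);   // master rolls the CreateTableProcedure back (pid=100 above)
  fail("create should fail while rsgroup appInfo has no online servers");
} catch (IOException e) {
  // expected: "No online servers in the rsgroup appInfo which table ... belongs to"
}

The second attempt (pid=101, below) succeeds because a server of the appInfo group is back online by then, as the draining-node removal for jenkins-hbase4.apache.org,42335 earlier in the log indicates.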
2023-07-23 21:10:57,412 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,412 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-23 21:10:57,427 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:57,429 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0943da4eb2875edc91898bb353a962ff, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:10:57,443 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:57,443 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing 0943da4eb2875edc91898bb353a962ff, disabling compactions & flushes 2023-07-23 21:10:57,443 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:57,443 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:57,443 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. after waiting 0 ms 2023-07-23 21:10:57,443 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:57,443 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 
2023-07-23 21:10:57,443 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for 0943da4eb2875edc91898bb353a962ff: 2023-07-23 21:10:57,448 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:57,449 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146657449"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146657449"}]},"ts":"1690146657449"} 2023-07-23 21:10:57,451 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:57,452 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:57,452 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146657452"}]},"ts":"1690146657452"} 2023-07-23 21:10:57,453 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-23 21:10:57,458 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, ASSIGN}] 2023-07-23 21:10:57,461 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, ASSIGN 2023-07-23 21:10:57,463 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42335,1690146647320; forceNewPlan=false, retain=false 2023-07-23 21:10:57,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:57,615 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=0943da4eb2875edc91898bb353a962ff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:57,615 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146657615"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146657615"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146657615"}]},"ts":"1690146657615"} 2023-07-23 21:10:57,620 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=102, state=RUNNABLE; OpenRegionProcedure 0943da4eb2875edc91898bb353a962ff, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:57,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:57,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:57,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0943da4eb2875edc91898bb353a962ff, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:57,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:57,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,788 INFO [StoreOpener-0943da4eb2875edc91898bb353a962ff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,790 DEBUG [StoreOpener-0943da4eb2875edc91898bb353a962ff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/f 2023-07-23 21:10:57,790 DEBUG [StoreOpener-0943da4eb2875edc91898bb353a962ff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/f 2023-07-23 21:10:57,791 INFO [StoreOpener-0943da4eb2875edc91898bb353a962ff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0943da4eb2875edc91898bb353a962ff columnFamilyName 
f 2023-07-23 21:10:57,792 INFO [StoreOpener-0943da4eb2875edc91898bb353a962ff-1] regionserver.HStore(310): Store=0943da4eb2875edc91898bb353a962ff/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:57,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:57,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:57,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0943da4eb2875edc91898bb353a962ff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11689570880, jitterRate=0.08867612481117249}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:57,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0943da4eb2875edc91898bb353a962ff: 2023-07-23 21:10:57,806 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff., pid=103, masterSystemTime=1690146657772 2023-07-23 21:10:57,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:57,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 
2023-07-23 21:10:57,809 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=0943da4eb2875edc91898bb353a962ff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:57,809 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146657809"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146657809"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146657809"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146657809"}]},"ts":"1690146657809"} 2023-07-23 21:10:57,816 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=102 2023-07-23 21:10:57,816 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=102, state=SUCCESS; OpenRegionProcedure 0943da4eb2875edc91898bb353a962ff, server=jenkins-hbase4.apache.org,42335,1690146647320 in 194 msec 2023-07-23 21:10:57,818 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-23 21:10:57,818 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, ASSIGN in 359 msec 2023-07-23 21:10:57,818 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:57,818 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146657818"}]},"ts":"1690146657818"} 2023-07-23 21:10:57,820 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-23 21:10:57,823 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:57,824 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 422 msec 2023-07-23 21:10:58,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:58,009 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 101 completed 2023-07-23 21:10:58,010 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:58,015 INFO [Listener at localhost/38995] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_ns:testCreateWhenRsgroupNoOnlineServers 
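The repeated "Checking to see if procedure is done pid=101" entries are the client's TableFuture polling the master until CreateTableProcedure pid=101 completes; the blocking Admin.createTable(...) call hides that loop. An equivalent explicit form, as a sketch (assuming the 2.x Admin.createTableAsync variant; not the test's code):

// Sketch only; needs java.util.concurrent.{Future, TimeUnit}. The future is resolved by
// polling the master for the procedure result, which is what produces the
// "Checking to see if procedure is done" DEBUG lines above.
Future<Void> pending = admin.createTableAsync(desc);   // submits the procedure (pid=101 here)
pending.get(60, TimeUnit.SECONDS);                     // blocks until SUCCESS or a failure is reported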
2023-07-23 21:10:58,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 21:10:58,019 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146658019"}]},"ts":"1690146658019"} 2023-07-23 21:10:58,020 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-23 21:10:58,022 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-23 21:10:58,023 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, UNASSIGN}] 2023-07-23 21:10:58,025 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, UNASSIGN 2023-07-23 21:10:58,026 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=0943da4eb2875edc91898bb353a962ff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:58,026 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146658026"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146658026"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146658026"}]},"ts":"1690146658026"} 2023-07-23 21:10:58,027 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 0943da4eb2875edc91898bb353a962ff, server=jenkins-hbase4.apache.org,42335,1690146647320}] 2023-07-23 21:10:58,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 21:10:58,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:58,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0943da4eb2875edc91898bb353a962ff, disabling compactions & flushes 2023-07-23 21:10:58,180 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:58,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 
2023-07-23 21:10:58,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. after waiting 0 ms 2023-07-23 21:10:58,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:58,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:58,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff. 2023-07-23 21:10:58,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0943da4eb2875edc91898bb353a962ff: 2023-07-23 21:10:58,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:58,187 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=0943da4eb2875edc91898bb353a962ff, regionState=CLOSED 2023-07-23 21:10:58,187 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146658187"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146658187"}]},"ts":"1690146658187"} 2023-07-23 21:10:58,190 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-23 21:10:58,190 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 0943da4eb2875edc91898bb353a962ff, server=jenkins-hbase4.apache.org,42335,1690146647320 in 161 msec 2023-07-23 21:10:58,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-23 21:10:58,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0943da4eb2875edc91898bb353a962ff, UNASSIGN in 167 msec 2023-07-23 21:10:58,192 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146658192"}]},"ts":"1690146658192"} 2023-07-23 21:10:58,193 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-23 21:10:58,195 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-23 21:10:58,196 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 181 msec 2023-07-23 21:10:58,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 21:10:58,321 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 104 completed 2023-07-23 21:10:58,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,324 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-23 21:10:58,325 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:58,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:58,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:58,329 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:58,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 21:10:58,331 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/f, FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/recovered.edits] 2023-07-23 21:10:58,336 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/recovered.edits/4.seqid to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff/recovered.edits/4.seqid 2023-07-23 21:10:58,337 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(596): Deleted hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0943da4eb2875edc91898bb353a962ff 2023-07-23 21:10:58,337 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-23 21:10:58,339 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,341 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-23 21:10:58,343 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-23 21:10:58,344 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,344 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-23 21:10:58,344 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146658344"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:58,345 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:58,345 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0943da4eb2875edc91898bb353a962ff, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690146657401.0943da4eb2875edc91898bb353a962ff.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:58,345 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
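The surrounding entries (pid=104, pid=107 and, just below, pid=108) are the test's cleanup of the table and its namespace. A sketch of the corresponding Admin calls (assumed form; argument values taken from the log, 'admin' and 'tableName' as in the earlier sketch):

// Matches DisableTableProcedure (pid=104), DeleteTableProcedure (pid=107)
// and DeleteNamespaceProcedure (pid=108).
admin.disableTable(tableName);      // table must be DISABLED before it can be deleted
admin.deleteTable(tableName);       // removes regions, META rows and the table descriptor
admin.deleteNamespace("Group_ns");  // only legal once the namespace contains no tables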
2023-07-23 21:10:58,345 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146658345"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:58,347 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-23 21:10:58,349 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:10:58,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 28 msec 2023-07-23 21:10:58,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 21:10:58,431 INFO [Listener at localhost/38995] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 107 completed 2023-07-23 21:10:58,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_ns 2023-07-23 21:10:58,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:58,437 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:58,439 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:58,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-23 21:10:58,441 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:58,442 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-23 21:10:58,443 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:58,443 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:58,445 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:10:58,446 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 11 msec 2023-07-23 21:10:58,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-23 21:10:58,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:58,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:58,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:58,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:58,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:58,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:58,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:58,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:58,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:58,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:58,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
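The run of RSGroupAdminService requests here (MoveTables, MoveServers, RemoveRSGroup, AddRSGroup) is the per-test rsgroup cleanup. A sketch of client calls that would produce such entries, using the RSGroupAdminClient that appears in the stack traces below (the variable names and the Connection handle are assumptions):

// Sketch only; server address and group names are taken from the log.
// Uses org.apache.hadoop.hbase.rsgroup.{RSGroupAdminClient, RSGroupInfo},
// org.apache.hadoop.hbase.net.Address and java.util.Collections.
RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
rsGroupAdmin.moveServers(
    Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42335)),
    RSGroupInfo.DEFAULT_GROUP);        // "move servers [...] to rsgroup default"
rsGroupAdmin.removeRSGroup("appInfo"); // drops the now-empty test group
rsGroupAdmin.addRSGroup("master");     // re-creates the "master" group for the next test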
2023-07-23 21:10:58,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:58,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42335] to rsgroup default 2023-07-23 21:10:58,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-23 21:10:58,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-23 21:10:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42335,1690146647320] are moved back to appInfo 2023-07-23 21:10:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-23 21:10:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:58,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-23 21:10:58,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:58,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:58,570 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:58,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:58,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:58,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:58,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 
21:10:58,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:58,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:58,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 698 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147858582, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:58,583 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:58,584 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:58,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,585 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:58,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:58,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:58,602 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=513 (was 513), OpenFileDescriptor=791 (was 793), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=466 (was 480), ProcessCount=173 (was 173), AvailableMemoryMB=7892 (was 7891) - AvailableMemoryMB LEAK? - 2023-07-23 21:10:58,602 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-23 21:10:58,617 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=513, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=466, ProcessCount=173, AvailableMemoryMB=7892 2023-07-23 21:10:58,617 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-23 21:10:58,618 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-23 21:10:58,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:58,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
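The "Waiting up to [60,000] milli-secs" and "Waiting for cleanup to finish [Name:default, ...]" lines come from Waiter-based polling in the test base class, which retries a predicate until the remaining rsgroups match the expected post-cleanup state. A sketch of that pattern (the predicate body is an assumption, not the test's exact condition; TEST_UTIL is the mini cluster's HBaseTestingUtility, rsGroupAdmin as in the previous sketch):

// Sketch of the polling behind "Waiting for cleanup to finish";
// uses org.apache.hadoop.hbase.Waiter and HBaseTestingUtility.waitFor.
TEST_UTIL.waitFor(60000, (Waiter.Predicate<Exception>) () ->
    rsGroupAdmin.listRSGroups().size() == 2);  // assumed condition: only "default" and "master" remain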
2023-07-23 21:10:58,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:58,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:58,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:58,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:58,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:58,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:58,632 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:58,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:58,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:58,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:58,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:58,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:58,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:58,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 726 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147858642, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:58,643 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:58,644 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:58,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,645 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:58,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:58,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:58,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:58,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:58,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] 
to rsgroup default 2023-07-23 21:10:58,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:58,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:58,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:58,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:58,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:58,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:58,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:58,660 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:58,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:58,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:58,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:58,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:58,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:58,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server 
jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:58,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 756 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147858669, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:58,670 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:58,672 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:58,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,672 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:58,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:58,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:58,689 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=514 (was 513) Potentially hanging thread: hconnection-0x38dc196c-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=791 (was 791), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=466 (was 466), ProcessCount=173 (was 173), AvailableMemoryMB=7892 (was 7892) 2023-07-23 21:10:58,690 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-23 21:10:58,705 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=514, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=466, ProcessCount=173, AvailableMemoryMB=7892 2023-07-23 21:10:58,705 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-23 21:10:58,705 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-23 21:10:58,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:58,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:58,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:58,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:58,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:58,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:58,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:58,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:58,718 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:58,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:58,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:58,721 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:58,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:58,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:58,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35573] to rsgroup master 2023-07-23 21:10:58,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:58,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] ipc.CallRunner(144): callId: 784 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46040 deadline: 1690147858733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 2023-07-23 21:10:58,734 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor65.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35573 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:58,735 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:58,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:58,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:58,736 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:42335, jenkins-hbase4.apache.org:42727, jenkins-hbase4.apache.org:45637, jenkins-hbase4.apache.org:46485], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:58,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:58,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:58,737 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-23 21:10:58,737 INFO [Listener at localhost/38995] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 21:10:58,738 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b022e64 to 127.0.0.1:59847 2023-07-23 21:10:58,738 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,738 DEBUG [Listener at localhost/38995] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 21:10:58,738 DEBUG [Listener at localhost/38995] util.JVMClusterUtil(257): Found active master hash=707253195, stopped=false 2023-07-23 21:10:58,739 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:10:58,739 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:10:58,739 INFO [Listener at localhost/38995] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:58,740 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:58,740 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:58,740 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:58,740 DEBUG [Listener at 
localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:58,740 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:58,740 INFO [Listener at localhost/38995] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 21:10:58,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:58,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:58,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:58,740 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:58,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:58,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:58,741 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a94471e to 127.0.0.1:59847 2023-07-23 21:10:58,741 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,742 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42727,1690146641774' ***** 2023-07-23 21:10:58,742 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:58,742 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46485,1690146642211' ***** 2023-07-23 21:10:58,742 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:58,742 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:58,742 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:58,742 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45637,1690146645550' ***** 2023-07-23 21:10:58,742 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:58,743 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42335,1690146647320' ***** 2023-07-23 21:10:58,743 INFO [RS:3;jenkins-hbase4:45637] 
regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:58,748 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:58,752 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:58,757 INFO [RS:3;jenkins-hbase4:45637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c1ac46c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:58,757 INFO [RS:2;jenkins-hbase4:46485] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@731442c6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:58,757 INFO [RS:4;jenkins-hbase4:42335] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1835fc4c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:58,757 INFO [RS:0;jenkins-hbase4:42727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@431a391d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:58,757 INFO [RS:3;jenkins-hbase4:45637] server.AbstractConnector(383): Stopped ServerConnector@4c813439{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:58,757 INFO [RS:2;jenkins-hbase4:46485] server.AbstractConnector(383): Stopped ServerConnector@f66c3ee{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:58,757 INFO [RS:0;jenkins-hbase4:42727] server.AbstractConnector(383): Stopped ServerConnector@37d22d71{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:58,757 INFO [RS:4;jenkins-hbase4:42335] server.AbstractConnector(383): Stopped ServerConnector@1d9f35c9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:58,757 INFO [RS:3;jenkins-hbase4:45637] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:58,758 INFO [RS:4;jenkins-hbase4:42335] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:58,758 INFO [RS:0;jenkins-hbase4:42727] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:58,758 INFO [RS:2;jenkins-hbase4:46485] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:58,759 INFO [RS:4;jenkins-hbase4:42335] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@691a3c28{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:58,758 INFO [RS:3;jenkins-hbase4:45637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15a0a997{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:58,761 INFO [RS:2;jenkins-hbase4:46485] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@367a968b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:58,761 INFO [RS:3;jenkins-hbase4:45637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f0e7505{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:58,762 INFO [RS:2;jenkins-hbase4:46485] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5769bd85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:58,760 INFO [RS:0;jenkins-hbase4:42727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2e7996f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:58,761 INFO [RS:4;jenkins-hbase4:42335] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d44147d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:58,763 INFO [RS:0;jenkins-hbase4:42727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@488a8507{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:58,763 INFO [RS:0;jenkins-hbase4:42727] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:58,764 INFO [RS:2;jenkins-hbase4:46485] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:58,764 INFO [RS:3;jenkins-hbase4:45637] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:58,764 INFO [RS:0;jenkins-hbase4:42727] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:58,764 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:58,764 INFO [RS:2;jenkins-hbase4:46485] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:58,764 INFO [RS:0;jenkins-hbase4:42727] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:58,764 INFO [RS:4;jenkins-hbase4:42335] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:58,764 INFO [RS:3;jenkins-hbase4:45637] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:58,764 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:58,764 INFO [RS:3;jenkins-hbase4:45637] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 21:10:58,764 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(3305): Received CLOSE for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:58,764 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(3305): Received CLOSE for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:58,764 INFO [RS:4;jenkins-hbase4:42335] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:58,764 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:58,764 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:58,764 INFO [RS:2;jenkins-hbase4:46485] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:58,765 INFO [RS:4;jenkins-hbase4:42335] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:58,765 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:58,765 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:58,765 DEBUG [RS:2;jenkins-hbase4:46485] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x10bde4a4 to 127.0.0.1:59847 2023-07-23 21:10:58,765 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:58,765 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:58,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cfdae6c1dde0d9be1f26f623634660ba, disabling compactions & flushes 2023-07-23 21:10:58,766 DEBUG [RS:3;jenkins-hbase4:45637] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0633ec82 to 127.0.0.1:59847 2023-07-23 21:10:58,766 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x255efbb8 to 127.0.0.1:59847 2023-07-23 21:10:58,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674d6b4e3c5d6a4f0860e9c874b3e183, disabling compactions & flushes 2023-07-23 21:10:58,766 DEBUG [RS:2;jenkins-hbase4:46485] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,765 DEBUG [RS:4;jenkins-hbase4:42335] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1088b565 to 127.0.0.1:59847 2023-07-23 21:10:58,767 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46485,1690146642211; all regions closed. 2023-07-23 21:10:58,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:58,767 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,766 DEBUG [RS:3;jenkins-hbase4:45637] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:10:58,767 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 21:10:58,767 INFO [RS:0;jenkins-hbase4:42727] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:58,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:58,767 DEBUG [RS:4;jenkins-hbase4:42335] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. after waiting 0 ms 2023-07-23 21:10:58,767 INFO [RS:0;jenkins-hbase4:42727] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:58,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:58,767 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1478): Online Regions={674d6b4e3c5d6a4f0860e9c874b3e183=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.} 2023-07-23 21:10:58,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:58,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 674d6b4e3c5d6a4f0860e9c874b3e183 1/1 column families, dataSize=15.26 KB heapSize=24.78 KB 2023-07-23 21:10:58,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. after waiting 0 ms 2023-07-23 21:10:58,768 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:58,768 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cfdae6c1dde0d9be1f26f623634660ba 1/1 column families, dataSize=365 B heapSize=1.13 KB 2023-07-23 21:10:58,767 INFO [RS:0;jenkins-hbase4:42727] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:58,767 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42335,1690146647320; all regions closed. 
2023-07-23 21:10:58,769 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 21:10:58,768 DEBUG [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1504): Waiting on 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:10:58,771 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-23 21:10:58,771 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1478): Online Regions={cfdae6c1dde0d9be1f26f623634660ba=hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., 1588230740=hbase:meta,,1.1588230740} 2023-07-23 21:10:58,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:10:58,773 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:10:58,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:10:58,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:10:58,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:10:58,773 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=46.70 KB heapSize=75.23 KB 2023-07-23 21:10:58,773 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1504): Waiting on 1588230740, cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:10:58,778 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,778 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,778 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,778 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,787 DEBUG [RS:2;jenkins-hbase4:46485] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:10:58,787 INFO [RS:2;jenkins-hbase4:46485] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46485%2C1690146642211:(num 1690146644042) 2023-07-23 21:10:58,787 DEBUG [RS:2;jenkins-hbase4:46485] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,787 INFO [RS:2;jenkins-hbase4:46485] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,788 DEBUG [RS:4;jenkins-hbase4:42335] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:10:58,788 INFO [RS:4;jenkins-hbase4:42335] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42335%2C1690146647320:(num 1690146647681) 2023-07-23 21:10:58,788 DEBUG [RS:4;jenkins-hbase4:42335] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,788 INFO [RS:4;jenkins-hbase4:42335] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,788 INFO [RS:2;jenkins-hbase4:46485] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, 
period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:58,789 INFO [RS:4;jenkins-hbase4:42335] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:58,790 INFO [RS:2;jenkins-hbase4:46485] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:58,790 INFO [RS:4;jenkins-hbase4:42335] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:58,790 INFO [RS:4;jenkins-hbase4:42335] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:58,790 INFO [RS:4;jenkins-hbase4:42335] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:58,790 INFO [RS:2;jenkins-hbase4:46485] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:58,790 INFO [RS:2;jenkins-hbase4:46485] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:58,790 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:58,790 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:58,792 INFO [RS:2;jenkins-hbase4:46485] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46485 2023-07-23 21:10:58,791 INFO [RS:4;jenkins-hbase4:42335] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42335 2023-07-23 21:10:58,818 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 21:10:58,818 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 21:10:58,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.26 KB at sequenceid=73 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/ef999fa06b66465f978c7309df40e37f 2023-07-23 21:10:58,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=365 B at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/.tmp/info/39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:10:58,833 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef999fa06b66465f978c7309df40e37f 2023-07-23 21:10:58,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/ef999fa06b66465f978c7309df40e37f as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f 2023-07-23 21:10:58,839 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=41.06 KB at sequenceid=138 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:10:58,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:10:58,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/.tmp/info/39241fc32c9441b98ec8f405a6015e4c as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info/39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:10:58,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef999fa06b66465f978c7309df40e37f 2023-07-23 21:10:58,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f, entries=21, sequenceid=73, filesize=5.7 K 2023-07-23 21:10:58,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~15.26 KB/15630, heapSize ~24.77 KB/25360, currentSize=0 B/0 for 674d6b4e3c5d6a4f0860e9c874b3e183 in 82ms, sequenceid=73, compaction requested=false 2023-07-23 21:10:58,850 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:10:58,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:10:58,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info/39241fc32c9441b98ec8f405a6015e4c, entries=5, sequenceid=11, filesize=5.1 K 2023-07-23 21:10:58,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~365 B/365, heapSize ~1.11 KB/1136, currentSize=0 B/0 for cfdae6c1dde0d9be1f26f623634660ba in 86ms, sequenceid=11, compaction requested=false 2023-07-23 21:10:58,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/recovered.edits/76.seqid, newMaxSeqId=76, maxSeqId=12 2023-07-23 21:10:58,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:58,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
2023-07-23 21:10:58,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:10:58,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/recovered.edits/14.seqid, newMaxSeqId=14, maxSeqId=1 2023-07-23 21:10:58,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:10:58,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:58,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:10:58,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:58,871 ERROR [Listener at localhost/38995-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@267d7b99 rejected from java.util.concurrent.ThreadPoolExecutor@2242fb8e[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-23 21:10:58,872 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, 
quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:58,872 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42335,1690146647320 2023-07-23 21:10:58,871 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:58,872 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:58,872 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:58,872 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:58,872 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46485,1690146642211 2023-07-23 21:10:58,873 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 21:10:58,873 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 21:10:58,874 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42335,1690146647320] 2023-07-23 21:10:58,874 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42335,1690146647320; numProcessing=1 2023-07-23 21:10:58,875 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42335,1690146647320 already deleted, retry=false 2023-07-23 21:10:58,875 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42335,1690146647320 expired; onlineServers=3 2023-07-23 21:10:58,875 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing 
expiration [jenkins-hbase4.apache.org,46485,1690146642211] 2023-07-23 21:10:58,875 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46485,1690146642211; numProcessing=2 2023-07-23 21:10:58,876 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46485,1690146642211 already deleted, retry=false 2023-07-23 21:10:58,876 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46485,1690146642211 expired; onlineServers=2 2023-07-23 21:10:58,878 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=138 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/rep_barrier/27a108a6498540b9881fffee97f83a46 2023-07-23 21:10:58,883 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27a108a6498540b9881fffee97f83a46 2023-07-23 21:10:58,897 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.91 KB at sequenceid=138 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/table/b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:10:58,902 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:10:58,903 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/15c38b0ea71c46adb63b62b92a154f8d as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:10:58,909 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:10:58,909 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d, entries=57, sequenceid=138, filesize=11.1 K 2023-07-23 21:10:58,910 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/rep_barrier/27a108a6498540b9881fffee97f83a46 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier/27a108a6498540b9881fffee97f83a46 2023-07-23 21:10:58,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27a108a6498540b9881fffee97f83a46 2023-07-23 21:10:58,916 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier/27a108a6498540b9881fffee97f83a46, entries=16, 
sequenceid=138, filesize=6.7 K 2023-07-23 21:10:58,916 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/table/b405931d76734326ace1ba7e7a4c97d4 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:10:58,922 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:10:58,922 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/b405931d76734326ace1ba7e7a4c97d4, entries=27, sequenceid=138, filesize=7.1 K 2023-07-23 21:10:58,923 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~46.70 KB/47820, heapSize ~75.18 KB/76984, currentSize=0 B/0 for 1588230740 in 150ms, sequenceid=138, compaction requested=false 2023-07-23 21:10:58,932 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/recovered.edits/141.seqid, newMaxSeqId=141, maxSeqId=1 2023-07-23 21:10:58,932 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:58,933 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:58,933 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:10:58,933 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:58,970 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45637,1690146645550; all regions closed. 2023-07-23 21:10:58,974 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550/jenkins-hbase4.apache.org%2C45637%2C1690146645550.1690146645953 not finished, retry = 0 2023-07-23 21:10:58,974 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42727,1690146641774; all regions closed. 
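The ERROR logged at 21:10:58,871 above comes from the ZooKeeper client's EventThread: a NodeDeleted watch event arrived after the region server's ZKWatcher executor had already been shut down, so the submission was rejected by the executor's default AbortPolicy (the stack trace shows the pool in Terminated state). During an orderly mini-cluster teardown this is typically benign noise rather than a test failure. Below is a minimal JDK-only sketch of that generic failure mode; it is an illustration only, not HBase code, and the class and message strings are invented.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class LateEventRejection {
  public static void main(String[] args) {
    // Stand-in for the ZKWatcher's internal event-processing executor.
    ExecutorService pool = Executors.newSingleThreadExecutor();
    pool.shutdown(); // the executor is already shut down, as during region server stop

    try {
      // Stand-in for a ZooKeeper NodeDeleted event delivered after shutdown.
      pool.submit(() -> System.out.println("process watched event"));
    } catch (RejectedExecutionException e) {
      // The default AbortPolicy rejects tasks submitted after shutdown,
      // which is what the ERROR at 21:10:58,871 reports.
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
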
2023-07-23 21:10:58,980 DEBUG [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:10:58,980 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42727%2C1690146641774.meta:.meta(num 1690146644349) 2023-07-23 21:10:58,985 DEBUG [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:10:58,985 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42727%2C1690146641774:(num 1690146644042) 2023-07-23 21:10:58,985 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:58,985 INFO [RS:0;jenkins-hbase4:42727] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:58,986 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:58,986 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:58,987 INFO [RS:0;jenkins-hbase4:42727] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42727 2023-07-23 21:10:58,989 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:58,989 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:58,989 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42727,1690146641774 2023-07-23 21:10:58,991 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42727,1690146641774] 2023-07-23 21:10:58,991 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42727,1690146641774; numProcessing=3 2023-07-23 21:10:58,992 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42727,1690146641774 already deleted, retry=false 2023-07-23 21:10:58,992 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42727,1690146641774 expired; onlineServers=1 2023-07-23 21:10:59,077 DEBUG [RS:3;jenkins-hbase4:45637] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:10:59,077 INFO [RS:3;jenkins-hbase4:45637] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45637%2C1690146645550:(num 1690146645953) 2023-07-23 21:10:59,077 DEBUG [RS:3;jenkins-hbase4:45637] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:59,077 INFO [RS:3;jenkins-hbase4:45637] 
regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:59,077 INFO [RS:3;jenkins-hbase4:45637] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:59,077 INFO [RS:3;jenkins-hbase4:45637] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:59,077 INFO [RS:3;jenkins-hbase4:45637] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:59,077 INFO [RS:3;jenkins-hbase4:45637] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:59,077 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:59,079 INFO [RS:3;jenkins-hbase4:45637] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45637 2023-07-23 21:10:59,081 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45637,1690146645550 2023-07-23 21:10:59,081 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:59,082 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45637,1690146645550] 2023-07-23 21:10:59,082 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45637,1690146645550; numProcessing=4 2023-07-23 21:10:59,084 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45637,1690146645550 already deleted, retry=false 2023-07-23 21:10:59,084 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45637,1690146645550 expired; onlineServers=0 2023-07-23 21:10:59,084 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35573,1690146639994' ***** 2023-07-23 21:10:59,084 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 21:10:59,085 DEBUG [M:0;jenkins-hbase4:35573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b1ad9f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:59,085 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:59,088 INFO [M:0;jenkins-hbase4:35573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@15490b41{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:10:59,088 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:59,088 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:59,088 INFO [M:0;jenkins-hbase4:35573] server.AbstractConnector(383): Stopped ServerConnector@4a54f3d5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:59,088 INFO [M:0;jenkins-hbase4:35573] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:59,089 INFO [M:0;jenkins-hbase4:35573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@437bc3bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:59,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:59,090 INFO [M:0;jenkins-hbase4:35573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1bf4d331{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:59,090 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35573,1690146639994 2023-07-23 21:10:59,090 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35573,1690146639994; all regions closed. 2023-07-23 21:10:59,090 DEBUG [M:0;jenkins-hbase4:35573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:59,090 INFO [M:0;jenkins-hbase4:35573] master.HMaster(1491): Stopping master jetty server 2023-07-23 21:10:59,091 INFO [M:0;jenkins-hbase4:35573] server.AbstractConnector(383): Stopped ServerConnector@6037568c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:59,091 DEBUG [M:0;jenkins-hbase4:35573] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 21:10:59,091 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 21:10:59,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146643661] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146643661,5,FailOnTimeoutGroup] 2023-07-23 21:10:59,091 DEBUG [M:0;jenkins-hbase4:35573] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 21:10:59,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146643661] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146643661,5,FailOnTimeoutGroup] 2023-07-23 21:10:59,092 INFO [M:0;jenkins-hbase4:35573] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 21:10:59,092 INFO [M:0;jenkins-hbase4:35573] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-23 21:10:59,092 INFO [M:0;jenkins-hbase4:35573] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 21:10:59,092 DEBUG [M:0;jenkins-hbase4:35573] master.HMaster(1512): Stopping service threads 2023-07-23 21:10:59,092 INFO [M:0;jenkins-hbase4:35573] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 21:10:59,092 ERROR [M:0;jenkins-hbase4:35573] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-23 21:10:59,093 INFO [M:0;jenkins-hbase4:35573] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 21:10:59,093 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-23 21:10:59,093 DEBUG [M:0;jenkins-hbase4:35573] zookeeper.ZKUtil(398): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 21:10:59,093 WARN [M:0;jenkins-hbase4:35573] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 21:10:59,093 INFO [M:0;jenkins-hbase4:35573] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 21:10:59,094 INFO [M:0;jenkins-hbase4:35573] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 21:10:59,094 DEBUG [M:0;jenkins-hbase4:35573] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:10:59,094 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:59,094 DEBUG [M:0;jenkins-hbase4:35573] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:59,094 DEBUG [M:0;jenkins-hbase4:35573] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:10:59,094 DEBUG [M:0;jenkins-hbase4:35573] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:10:59,094 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=363.69 KB heapSize=433.31 KB 2023-07-23 21:10:59,111 INFO [M:0;jenkins-hbase4:35573] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=363.69 KB at sequenceid=796 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6e0eb0a7955c4e2aa2b5d2d2df7904ef 2023-07-23 21:10:59,118 DEBUG [M:0;jenkins-hbase4:35573] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6e0eb0a7955c4e2aa2b5d2d2df7904ef as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6e0eb0a7955c4e2aa2b5d2d2df7904ef 2023-07-23 21:10:59,125 INFO [M:0;jenkins-hbase4:35573] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6e0eb0a7955c4e2aa2b5d2d2df7904ef, entries=108, sequenceid=796, filesize=25.2 K 2023-07-23 21:10:59,125 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegion(2948): Finished flush of dataSize ~363.69 KB/372417, heapSize ~433.30 KB/443696, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=796, compaction requested=false 2023-07-23 21:10:59,127 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:59,127 DEBUG [M:0;jenkins-hbase4:35573] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:59,135 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:59,135 INFO [M:0;jenkins-hbase4:35573] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 21:10:59,136 INFO [M:0;jenkins-hbase4:35573] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35573 2023-07-23 21:10:59,137 DEBUG [M:0;jenkins-hbase4:35573] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35573,1690146639994 already deleted, retry=false 2023-07-23 21:10:59,340 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,340 INFO [M:0;jenkins-hbase4:35573] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35573,1690146639994; zookeeper connection closed. 2023-07-23 21:10:59,340 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:35573-0x1019405901c0000, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,441 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,441 INFO [RS:3;jenkins-hbase4:45637] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45637,1690146645550; zookeeper connection closed. 
2023-07-23 21:10:59,441 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45637-0x1019405901c000b, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,441 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4c5991b8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4c5991b8 2023-07-23 21:10:59,541 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,541 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42727,1690146641774; zookeeper connection closed. 2023-07-23 21:10:59,541 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x1019405901c0001, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,541 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3a0222b6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3a0222b6 2023-07-23 21:10:59,641 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,641 INFO [RS:4;jenkins-hbase4:42335] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42335,1690146647320; zookeeper connection closed. 2023-07-23 21:10:59,641 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42335-0x1019405901c000d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,642 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@436b5bd0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@436b5bd0 2023-07-23 21:10:59,741 INFO [RS:2;jenkins-hbase4:46485] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46485,1690146642211; zookeeper connection closed. 
2023-07-23 21:10:59,741 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,742 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:46485-0x1019405901c0003, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:59,742 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7fdba274] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7fdba274 2023-07-23 21:10:59,742 INFO [Listener at localhost/38995] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-23 21:10:59,742 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-23 21:11:00,166 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:00,167 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:11:00,167 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:11:01,489 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:11:01,743 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 21:11:01,743 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 21:11:01,743 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-23 21:11:01,744 DEBUG [Listener at localhost/38995] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
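At this point the log shows the HBase side fully stopped (JVMClusterUtil reports 1 master and 5 region servers shut down) while the mini DFS and the ZooKeeper quorum at 127.0.0.1:59847 remain up, and the test then brings HBase back on freshly randomized ports. The sketch below shows the restart pattern such a test typically drives through HBaseTestingUtility; the names TEST_UTIL and NUM_RS, the sleep duration, and the exact call sequence are assumptions for illustration, not taken from this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class RestartHBaseSketch {
  // Placeholder fixtures; a real test class holds these as shared static fields.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
  private static final int NUM_RS = 5;

  static void restartHBaseKeepingDfsAndZk() throws Exception {
    // Stops the HMaster and every region server; the mini DFS and the
    // ZooKeeper quorum stay running, so hbase.rootdir and the /hbase znodes survive.
    TEST_UTIL.shutdownMiniHBaseCluster();
    Thread.sleep(3000); // "Sleeping a bit", per the test's own log line
    // Starts a new master and region servers on random ports against the same rootdir.
    TEST_UTIL.restartHBaseCluster(NUM_RS);
  }
}
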
2023-07-23 21:11:01,745 INFO [Listener at localhost/38995] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:01,745 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,745 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,745 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:01,745 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,745 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:01,746 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:01,746 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38577 2023-07-23 21:11:01,748 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:01,749 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:01,750 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38577 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:01,754 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:385770x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:01,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38577-0x1019405901c0010 connected 2023-07-23 21:11:01,757 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:01,758 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:01,758 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:01,761 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38577 2023-07-23 21:11:01,762 DEBUG [Listener at localhost/38995] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38577 2023-07-23 21:11:01,762 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38577 2023-07-23 21:11:01,766 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38577 2023-07-23 21:11:01,766 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38577 2023-07-23 21:11:01,768 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:01,769 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:01,769 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:01,769 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 21:11:01,769 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:01,769 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:01,769 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 21:11:01,770 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 43547 2023-07-23 21:11:01,770 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:01,779 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,779 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67c6a36d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:01,779 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,779 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e3418f0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:01,903 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:01,905 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:01,905 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:01,905 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:01,906 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,908 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2505c556{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-43547-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2769887508621897625/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:11:01,909 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@7071e77c{HTTP/1.1, (http/1.1)}{0.0.0.0:43547} 2023-07-23 21:11:01,910 INFO [Listener at localhost/38995] server.Server(415): Started @27710ms 2023-07-23 21:11:01,910 INFO [Listener at localhost/38995] master.HMaster(444): hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914, hbase.cluster.distributed=false 2023-07-23 21:11:01,912 DEBUG [pool-345-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-23 21:11:01,924 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:01,924 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,924 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,924 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:01,924 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,924 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:01,925 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:01,925 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38927 2023-07-23 21:11:01,926 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:01,927 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:01,928 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:01,930 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:01,931 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38927 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:01,934 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:389270x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:01,935 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:389270x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:01,936 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:389270x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:01,936 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:389270x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:01,938 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38927-0x1019405901c0011 connected 2023-07-23 21:11:01,942 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38927 2023-07-23 21:11:01,943 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38927 2023-07-23 21:11:01,943 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38927 2023-07-23 
21:11:01,944 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38927 2023-07-23 21:11:01,946 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38927 2023-07-23 21:11:01,948 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:01,948 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:01,948 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:01,949 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:01,949 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:01,949 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:01,949 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:01,950 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 44037 2023-07-23 21:11:01,950 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:01,951 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,952 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46d1555c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:01,952 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,952 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@37264fd4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:02,074 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:02,075 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:02,075 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:02,075 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:02,076 INFO [Listener at 
localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,077 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@798231e4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-44037-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3018332523139774839/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:02,078 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@435a4063{HTTP/1.1, (http/1.1)}{0.0.0.0:44037} 2023-07-23 21:11:02,078 INFO [Listener at localhost/38995] server.Server(415): Started @27878ms 2023-07-23 21:11:02,090 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:02,091 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:02,091 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:02,091 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:02,091 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:02,091 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:02,091 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:02,092 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42175 2023-07-23 21:11:02,092 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:02,093 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:02,094 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:02,095 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:02,097 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42175 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:02,102 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:421750x0, 
quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:02,104 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:421750x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:02,104 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42175-0x1019405901c0012 connected 2023-07-23 21:11:02,105 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:02,106 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:02,106 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42175 2023-07-23 21:11:02,110 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42175 2023-07-23 21:11:02,114 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42175 2023-07-23 21:11:02,115 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42175 2023-07-23 21:11:02,115 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42175 2023-07-23 21:11:02,117 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:02,117 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:02,117 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:02,117 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:02,117 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:02,118 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:02,118 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
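Each region server connects to the test ensemble and, as the ZKUtil lines above show, sets watchers on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. The sketch below shows the same idea with the plain Apache ZooKeeper client rather than HBase's internal ZKUtil: exists() both checks for the node and leaves a watch behind, so a NodeCreated event fires when the node appears later. The ensemble address and paths are copied from the log; everything else is illustrative.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Global watcher: receives the SyncConnected event seen in the log
        // (type=None, state=SyncConnected, path=null) plus later node events.
        Watcher watcher = (WatchedEvent event) -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected
              && event.getType() == Watcher.Event.EventType.None) {
            connected.countDown();
          } else if (event.getType() == Watcher.Event.EventType.NodeCreated) {
            System.out.println("znode created: " + event.getPath());
          }
        };
        ZooKeeper zk = new ZooKeeper("127.0.0.1:59847", 90_000, watcher);
        connected.await();

        // exists() returns null for an absent node, but the watch is still
        // registered -- the equivalent of "Set watcher on znode that does not
        // yet exist" above. When /hbase/master is created, NodeCreated fires.
        for (String path : new String[] {"/hbase/master", "/hbase/running", "/hbase/acl"}) {
          if (zk.exists(path, true) == null) {
            System.out.println("watching not-yet-existing znode " + path);
          }
        }
        // ... wait for events, then close ...
        zk.close();
      }
    }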
2023-07-23 21:11:02,118 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 38177 2023-07-23 21:11:02,118 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:02,120 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,120 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5459ff94{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:02,121 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,121 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@62436ad0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:02,285 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:02,286 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:02,287 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:02,287 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:02,288 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,289 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6b2e4cf8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-38177-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2764520130449750091/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:02,291 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@37fc8d1b{HTTP/1.1, (http/1.1)}{0.0.0.0:38177} 2023-07-23 21:11:02,291 INFO [Listener at localhost/38995] server.Server(415): Started @28091ms 2023-07-23 21:11:02,308 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:02,308 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:02,308 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:02,309 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:02,309 INFO 
[Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:02,309 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:02,309 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:02,310 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39795 2023-07-23 21:11:02,310 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:02,312 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:02,312 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:02,313 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:02,314 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39795 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:02,324 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:397950x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:02,325 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:397950x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:02,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39795-0x1019405901c0013 connected 2023-07-23 21:11:02,326 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:02,327 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:02,330 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39795 2023-07-23 21:11:02,330 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39795 2023-07-23 21:11:02,334 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39795 2023-07-23 21:11:02,334 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39795 2023-07-23 21:11:02,335 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=39795 2023-07-23 21:11:02,337 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:02,337 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:02,337 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:02,338 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:02,338 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:02,338 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:02,339 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:02,339 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 36983 2023-07-23 21:11:02,340 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:02,346 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,346 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@58f5ae2a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:02,346 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,347 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4134ac44{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:02,461 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:02,462 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:02,462 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:02,462 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:11:02,464 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:02,465 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3153ec14{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-36983-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4498864108735804820/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:02,466 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@44b83e33{HTTP/1.1, (http/1.1)}{0.0.0.0:36983} 2023-07-23 21:11:02,466 INFO [Listener at localhost/38995] server.Server(415): Started @28266ms 2023-07-23 21:11:02,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:02,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@264618d7{HTTP/1.1, (http/1.1)}{0.0.0.0:32789} 2023-07-23 21:11:02,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @28272ms 2023-07-23 21:11:02,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,473 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:11:02,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,475 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:02,475 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:02,475 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:02,475 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:02,476 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:02,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:02,478 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38577,1690146661744 from backup master directory 2023-07-23 21:11:02,478 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:02,480 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,480 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:11:02,481 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:02,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:02,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x308e224c to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:02,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d3c8b7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:02,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:02,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 21:11:02,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:02,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994-dead as it is dead 2023-07-23 21:11:02,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994-dead/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 2023-07-23 21:11:02,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994-dead/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 after 4ms 2023-07-23 21:11:02,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994-dead/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 2023-07-23 21:11:02,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,35573,1690146639994-dead 2023-07-23 21:11:02,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38577%2C1690146661744, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/oldWALs, maxLogs=10 2023-07-23 21:11:02,567 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:02,568 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:02,571 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:02,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 2023-07-23 21:11:02,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK]] 2023-07-23 21:11:02,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:02,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:02,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:02,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:02,582 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:02,583 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 21:11:02,583 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 21:11:02,590 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6e0eb0a7955c4e2aa2b5d2d2df7904ef 2023-07-23 21:11:02,590 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:02,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-23 21:11:02,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(5276): Replaying edits from hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 2023-07-23 21:11:02,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 939, firstSequenceIdInLog=3, maxSequenceIdInLog=798, path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 2023-07-23 21:11:02,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C35573%2C1690146639994.1690146642882 2023-07-23 21:11:02,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:02,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/798.seqid, newMaxSeqId=798, maxSeqId=1 2023-07-23 21:11:02,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=799; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10793502560, jitterRate=0.005223259329795837}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:02,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:02,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 21:11:02,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 21:11:02,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 21:11:02,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
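The "Applied 0, skipped 939, firstSequenceIdInLog=3, maxSequenceIdInLog=798" line above is the recovered-edits replay for the master's local 'master:store' region: roughly speaking, edits whose sequence ids are already covered by the store's flushed files are skipped rather than re-applied, which is why a replay can report zero applied edits. The snippet below is a small, self-contained illustration of that bookkeeping with made-up numbers; it is not HBase's replay implementation.

    public class RecoveredEditsReplaySketch {
      public static void main(String[] args) {
        // Maximum sequence id already persisted in the store's HFiles (made up).
        long maxFlushedSeqId = 798L;
        // Sequence ids read back from the recovered WAL (made-up sample).
        long[] recoveredEditSeqIds = {3, 500, 798};

        long applied = 0, skipped = 0;
        long firstSeqIdInLog = Long.MAX_VALUE, maxSeqIdInLog = Long.MIN_VALUE;
        for (long seqId : recoveredEditSeqIds) {
          firstSeqIdInLog = Math.min(firstSeqIdInLog, seqId);
          maxSeqIdInLog = Math.max(maxSeqIdInLog, seqId);
          if (seqId <= maxFlushedSeqId) {
            skipped++;   // already covered by a flushed store file, not re-applied
          } else {
            applied++;   // would be replayed into the memstore
          }
        }
        System.out.printf("Applied %d, skipped %d, firstSequenceIdInLog=%d, maxSequenceIdInLog=%d%n",
            applied, skipped, firstSeqIdInLog, maxSeqIdInLog);
      }
    }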
2023-07-23 21:11:02,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 21:11:02,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-23 21:11:02,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-23 21:11:02,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-23 21:11:02,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-23 21:11:02,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:11:02,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE 2023-07-23 21:11:02,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36963,1690146641995, splitWal=true, meta=false 2023-07-23 21:11:02,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-23 21:11:02,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:11:02,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:11:02,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:11:02,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:11:02,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:11:02,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:11:02,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:02,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=68, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:11:02,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:11:02,658 
DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:11:02,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=75, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:02,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=76, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:11:02,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:11:02,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:11:02,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:11:02,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-23 21:11:02,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=87, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-23 21:11:02,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690146655169 type: FLUSH version: 2 ttl: 0 ) 2023-07-23 21:11:02,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:11:02,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:11:02,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:11:02,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:11:02,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=99, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-23 21:11:02,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:02,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; 
CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:02,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:02,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:02,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=108, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:11:02,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 21 msec 2023-07-23 21:11:02,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 21:11:02,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-23 21:11:02,664 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,42727,1690146641774, table=hbase:meta, region=1588230740 2023-07-23 21:11:02,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 2023-07-23 21:11:02,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42335,1690146647320 already deleted, retry=false 2023-07-23 21:11:02,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,42335,1690146647320 on jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,42335,1690146647320, splitWal=true, meta=false 2023-07-23 21:11:02,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=109 for jenkins-hbase4.apache.org,42335,1690146647320 (carryingMeta=false) jenkins-hbase4.apache.org,42335,1690146647320/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@76f569a0[Write locks = 1, Read locks = 0], oldState=ONLINE. 
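At this point the newly started master, jenkins-hbase4.apache.org,38577,1690146661744, has loaded the procedure store from the previous run and begins scheduling ServerCrashProcedures for region servers of the earlier cluster generation that never re-registered. After such a restart, a test typically blocks until the master reports itself initialized before asserting anything; the sketch below uses HBaseTestingUtility's generic waitFor helper for that. TEST_UTIL is assumed to be the utility that started the cluster elsewhere, and the 60-second timeout is arbitrary.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.Waiter;

    public class WaitForFailoverSketch {
      // Assumed: initialized by the test's setup that started the mini cluster.
      static HBaseTestingUtility TEST_UTIL;

      static void waitForActiveMaster() throws Exception {
        // Block until the restarted master has finished initialization
        // (procedure store loaded, meta assigned, region servers reported in).
        TEST_UTIL.waitFor(60_000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            return TEST_UTIL.getMiniHBaseCluster().getMaster() != null
                && TEST_UTIL.getMiniHBaseCluster().getMaster().isInitialized();
          }
        });
      }
    }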
2023-07-23 21:11:02,671 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46485,1690146642211 already deleted, retry=false 2023-07-23 21:11:02,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,46485,1690146642211 on jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,46485,1690146642211, splitWal=true, meta=false 2023-07-23 21:11:02,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=110 for jenkins-hbase4.apache.org,46485,1690146642211 (carryingMeta=false) jenkins-hbase4.apache.org,46485,1690146642211/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6f6b300c[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-23 21:11:02,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42727,1690146641774 already deleted, retry=false 2023-07-23 21:11:02,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,42727,1690146641774 on jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,42727,1690146641774, splitWal=true, meta=true 2023-07-23 21:11:02,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=111 for jenkins-hbase4.apache.org,42727,1690146641774 (carryingMeta=true) jenkins-hbase4.apache.org,42727,1690146641774/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@578eceae[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-23 21:11:02,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45637,1690146645550 already deleted, retry=false 2023-07-23 21:11:02,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,45637,1690146645550 on jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,45637,1690146645550, splitWal=true, meta=false 2023-07-23 21:11:02,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=112 for jenkins-hbase4.apache.org,45637,1690146645550 (carryingMeta=false) jenkins-hbase4.apache.org,45637,1690146645550/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@743fe1bc[Write locks = 1, Read locks = 0], oldState=ONLINE. 
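The servers expired above belonged to the previous generation of this rsgroup test cluster; the master doing the expiring was itself started with the rsgroup machinery enabled, which is why the RSGroupAdminService and the RSGroupAdminEndpoint coprocessor show up a few entries further down. A minimal sketch of the configuration that enables them is below; the keys are the standard master-coprocessor and load-balancer settings, but treating these exact values as this test's setup is an assumption.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Load the rsgroup admin endpoint as a master coprocessor, so the
        // RSGroupAdminService seen in the log is registered on the master.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Use the group-aware balancer so region placement respects rsgroups.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        // ... pass conf to HBaseTestingUtility / cluster startup ...
        System.out.println(conf.get("hbase.coprocessor.master.classes"));
      }
    }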
2023-07-23 21:11:02,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-23 21:11:02,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 21:11:02,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 21:11:02,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 21:11:02,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 21:11:02,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 21:11:02,682 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:02,682 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:02,682 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:02,682 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:02,682 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:02,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38577,1690146661744, sessionid=0x1019405901c0010, setting cluster-up flag (Was=false) 2023-07-23 21:11:02,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 21:11:02,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 21:11:02,690 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:02,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 21:11:02,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 21:11:02,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-23 21:11:02,694 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 21:11:02,695 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:02,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 21:11:02,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-23 21:11:02,700 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:02,701 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:42727 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:42727 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:02,702 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:42727 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:42727 2023-07-23 21:11:02,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:11:02,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:11:02,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:11:02,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
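The two balancer entries above report the StochasticLoadBalancer loading its defaults (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false). A minimal sketch of how those tunables could be overridden in a test- or client-side configuration before the master starts; the hbase.master.balancer.stochastic.* and hbase.master.loadbalance.bytable property names are assumed from standard HBase configuration and are not taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StochasticBalancerTuning {
        public static void main(String[] args) {
            // Start from the stock HBase configuration (hbase-default.xml / hbase-site.xml).
            Configuration conf = HBaseConfiguration.create();
            // Mirror the values reported in the "Loaded config" entries above.
            conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
            conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
            conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
            conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
            // Corresponds to the isByTable=false flag in the same entries.
            conf.setBoolean("hbase.master.loadbalance.bytable", false);
            System.out.println("maxSteps = "
                    + conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
        }
    }

A real deployment would normally set these in hbase-site.xml; the programmatic form is shown only because it matches what minicluster tests typically pass to the master.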
2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:02,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690146692715 2023-07-23 21:11:02,716 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 21:11:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 21:11:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 21:11:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 21:11:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 21:11:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 21:11:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:02,723 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42335,1690146647320; numProcessing=1 2023-07-23 21:11:02,723 DEBUG [PEWorker-5] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45637,1690146645550; numProcessing=2 2023-07-23 21:11:02,723 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=109, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42335,1690146647320, splitWal=true, meta=false 2023-07-23 21:11:02,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 21:11:02,724 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42727,1690146641774; numProcessing=3 2023-07-23 21:11:02,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 21:11:02,724 INFO [PEWorker-5] procedure.ServerCrashProcedure(161): Start pid=112, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,45637,1690146645550, splitWal=true, meta=false 2023-07-23 21:11:02,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 21:11:02,724 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=111, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42727,1690146641774, splitWal=true, meta=true 2023-07-23 21:11:02,724 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46485,1690146642211; numProcessing=4 2023-07-23 21:11:02,724 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=110, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,46485,1690146642211, splitWal=true, meta=false 2023-07-23 21:11:02,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 21:11:02,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 21:11:02,726 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=111, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42727,1690146641774, splitWal=true, meta=true, isMeta: true 2023-07-23 21:11:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146662726,5,FailOnTimeoutGroup] 2023-07-23 21:11:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146662730,5,FailOnTimeoutGroup] 2023-07-23 21:11:02,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
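The HMaster entry above notes that reopening regions with a very high storeFileRefCount stays disabled unless hbase.regions.recovery.store.file.ref.count is given a value greater than 0. A minimal sketch of enabling it; the property key is quoted verbatim from the log, while the threshold of 256 is an arbitrary illustration, not a recommendation:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StoreFileRefCountRecovery {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Any value > 0 enables the master's regions-recovery chore to reopen
            // regions whose store file reference count exceeds the threshold;
            // 0 or less keeps the feature disabled, as reported in the log above.
            conf.setInt("hbase.regions.recovery.store.file.ref.count", 256);
            System.out.println("storeFileRefCount threshold = "
                    + conf.getInt("hbase.regions.recovery.store.file.ref.count", 0));
        }
    }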
2023-07-23 21:11:02,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690146662731, completionTime=-1 2023-07-23 21:11:02,731 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-23 21:11:02,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-23 21:11:02,732 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774-splitting 2023-07-23 21:11:02,732 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774-splitting dir is empty, no logs to split. 2023-07-23 21:11:02,733 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,42727,1690146641774 WAL count=0, meta=true 2023-07-23 21:11:02,735 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774-splitting dir is empty, no logs to split. 2023-07-23 21:11:02,735 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,42727,1690146641774 WAL count=0, meta=true 2023-07-23 21:11:02,735 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,42727,1690146641774 WAL splitting is done? 
wals=0, meta=true 2023-07-23 21:11:02,736 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 21:11:02,737 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=113, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 21:11:02,738 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=113, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:02,768 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:02,768 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:02,768 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:02,770 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:02,771 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:02,769 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:02,774 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:02,774 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:02,774 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:02,774 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:02,774 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:02,775 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:02,777 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:02,778 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:02,778 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:02,779 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ReadOnlyZKClient(139): Connect 0x4d630367 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:02,779 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ReadOnlyZKClient(139): Connect 0x168f9b2b to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:02,779 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ReadOnlyZKClient(139): Connect 0x5a1ef121 to 
127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:02,795 DEBUG [RS:1;jenkins-hbase4:42175] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43a4e4bb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:02,795 DEBUG [RS:0;jenkins-hbase4:38927] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54d91412, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:02,795 DEBUG [RS:1;jenkins-hbase4:42175] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@479b6817, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:02,795 DEBUG [RS:2;jenkins-hbase4:39795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1848778a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:02,795 DEBUG [RS:0;jenkins-hbase4:38927] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50e69a51, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:02,795 DEBUG [RS:2;jenkins-hbase4:39795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c77f05b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:02,803 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:42727 this server is in the failed servers list 2023-07-23 21:11:02,806 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38927 2023-07-23 21:11:02,807 INFO [RS:0;jenkins-hbase4:38927] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:02,807 INFO [RS:0;jenkins-hbase4:38927] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:02,807 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-23 21:11:02,807 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38577,1690146661744 with isa=jenkins-hbase4.apache.org/172.31.14.131:38927, startcode=1690146661924 2023-07-23 21:11:02,807 DEBUG [RS:0;jenkins-hbase4:38927] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:02,808 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42175 2023-07-23 21:11:02,808 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39795 2023-07-23 21:11:02,808 INFO [RS:1;jenkins-hbase4:42175] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:02,808 INFO [RS:1;jenkins-hbase4:42175] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:02,808 INFO [RS:2;jenkins-hbase4:39795] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:02,809 INFO [RS:2;jenkins-hbase4:39795] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:02,809 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:02,809 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:02,809 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57431, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:02,809 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38577,1690146661744 with isa=jenkins-hbase4.apache.org/172.31.14.131:42175, startcode=1690146662090 2023-07-23 21:11:02,809 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38577,1690146661744 with isa=jenkins-hbase4.apache.org/172.31.14.131:39795, startcode=1690146662307 2023-07-23 21:11:02,809 DEBUG [RS:1;jenkins-hbase4:42175] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:02,809 DEBUG [RS:2;jenkins-hbase4:39795] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:02,810 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38577] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,810 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:11:02,811 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:02,811 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:02,811 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43547 2023-07-23 21:11:02,812 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:11:02,812 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39073, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:02,812 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51717, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:02,812 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38577] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,812 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:02,813 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38577] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,813 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 21:11:02,813 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
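The RSGroupInfoManagerImpl entries above show the rsgroup endpoint updating its default-group membership as each region server registers. A minimal sketch of the client side of that endpoint, assuming the RSGroupAdminClient API shipped with the hbase-rsgroup module on branch-2; the group name "appGroup" is a made-up example:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
                // Create a new group; servers stay in the default group until moved.
                rsGroupAdmin.addRSGroup("appGroup");
                // Print the host:port addresses currently in the default group,
                // i.e. the servers the listener thread above is tracking.
                RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
                for (Address server : defaultGroup.getServers()) {
                    System.out.println(server);
                }
            }
        }
    }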
2023-07-23 21:11:02,813 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:11:02,813 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:02,813 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:02,813 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:02,813 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:02,813 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:02,813 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ZKUtil(162): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,813 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43547 2023-07-23 21:11:02,813 WARN [RS:0;jenkins-hbase4:38927] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:02,813 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43547 2023-07-23 21:11:02,814 INFO [RS:0;jenkins-hbase4:38927] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:02,814 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,814 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38927,1690146661924] 2023-07-23 21:11:02,818 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:02,819 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ZKUtil(162): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,819 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ZKUtil(162): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,819 WARN [RS:1;jenkins-hbase4:42175] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:11:02,819 WARN [RS:2;jenkins-hbase4:39795] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:02,819 INFO [RS:1;jenkins-hbase4:42175] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:02,819 INFO [RS:2;jenkins-hbase4:39795] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:02,819 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,819 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,819 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42175,1690146662090] 2023-07-23 21:11:02,819 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ZKUtil(162): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,819 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39795,1690146662307] 2023-07-23 21:11:02,820 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ZKUtil(162): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,820 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ZKUtil(162): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,821 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:02,821 INFO [RS:0;jenkins-hbase4:38927] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:02,830 INFO [RS:0;jenkins-hbase4:38927] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:02,831 INFO [RS:0;jenkins-hbase4:38927] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:02,831 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
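Each region server above instantiates a WALProvider of type AsyncFSWALProvider. A minimal sketch of selecting that provider explicitly, assuming the standard hbase.wal.provider key and its "asyncfs" / "filesystem" values; those names come from general HBase configuration rather than from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSelection {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // "asyncfs" maps to AsyncFSWALProvider, the provider reported above;
            // "filesystem" would select the classic FSHLog-based provider instead.
            conf.set("hbase.wal.provider", "asyncfs");
            System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
        }
    }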
2023-07-23 21:11:02,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=100ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-23 21:11:02,835 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:02,838 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ZKUtil(162): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,838 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ZKUtil(162): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,838 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ZKUtil(162): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,840 DEBUG [RS:2;jenkins-hbase4:39795] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:02,840 INFO [RS:2;jenkins-hbase4:39795] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:02,842 INFO [RS:2;jenkins-hbase4:39795] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:02,844 INFO [RS:2;jenkins-hbase4:39795] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:02,844 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,844 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:02,846 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,846 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:02,846 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,846 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,846 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,846 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,846 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,846 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,846 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,847 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,847 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,847 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,847 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:02,847 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:02,847 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,847 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:2;jenkins-hbase4:39795] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ZKUtil(162): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,848 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:0;jenkins-hbase4:38927] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,848 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ZKUtil(162): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,849 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ZKUtil(162): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,850 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:02,850 INFO [RS:1;jenkins-hbase4:42175] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:02,855 INFO [RS:1;jenkins-hbase4:42175] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:02,856 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,856 INFO [RS:1;jenkins-hbase4:42175] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:02,857 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,856 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,857 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,857 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,864 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,864 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,864 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:02,864 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:02,865 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,870 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,870 DEBUG [RS:1;jenkins-hbase4:42175] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,871 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,872 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,872 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,873 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,881 INFO [RS:2;jenkins-hbase4:39795] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:02,881 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39795,1690146662307-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:02,885 INFO [RS:0;jenkins-hbase4:38927] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:02,885 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38927,1690146661924-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,888 DEBUG [jenkins-hbase4:38577] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:11:02,888 INFO [RS:1;jenkins-hbase4:42175] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:02,888 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:02,888 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42175,1690146662090-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,888 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:02,888 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:02,888 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:02,888 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:02,891 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42175,1690146662090, state=OPENING 2023-07-23 21:11:02,893 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:02,893 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:02,893 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42175,1690146662090}] 2023-07-23 21:11:02,901 INFO [RS:2;jenkins-hbase4:39795] regionserver.Replication(203): jenkins-hbase4.apache.org,39795,1690146662307 started 2023-07-23 21:11:02,901 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39795,1690146662307, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39795, sessionid=0x1019405901c0013 2023-07-23 21:11:02,901 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:02,902 DEBUG [RS:2;jenkins-hbase4:39795] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,902 DEBUG [RS:2;jenkins-hbase4:39795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39795,1690146662307' 2023-07-23 21:11:02,902 DEBUG [RS:2;jenkins-hbase4:39795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:02,902 DEBUG [RS:2;jenkins-hbase4:39795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under 
znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:02,902 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:02,903 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:02,903 DEBUG [RS:2;jenkins-hbase4:39795] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:02,903 DEBUG [RS:2;jenkins-hbase4:39795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39795,1690146662307' 2023-07-23 21:11:02,903 DEBUG [RS:2;jenkins-hbase4:39795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:02,903 DEBUG [RS:2;jenkins-hbase4:39795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:02,903 DEBUG [RS:2;jenkins-hbase4:39795] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:02,903 INFO [RS:2;jenkins-hbase4:39795] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 21:11:02,906 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,907 INFO [RS:0;jenkins-hbase4:38927] regionserver.Replication(203): jenkins-hbase4.apache.org,38927,1690146661924 started 2023-07-23 21:11:02,907 INFO [RS:1;jenkins-hbase4:42175] regionserver.Replication(203): jenkins-hbase4.apache.org,42175,1690146662090 started 2023-07-23 21:11:02,907 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38927,1690146661924, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38927, sessionid=0x1019405901c0011 2023-07-23 21:11:02,907 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ZKUtil(398): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 21:11:02,907 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:02,907 INFO [RS:2;jenkins-hbase4:39795] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 21:11:02,907 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42175,1690146662090, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42175, sessionid=0x1019405901c0012 2023-07-23 21:11:02,907 DEBUG [RS:0;jenkins-hbase4:38927] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,907 DEBUG [RS:0;jenkins-hbase4:38927] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38927,1690146661924' 2023-07-23 21:11:02,907 DEBUG [RS:0;jenkins-hbase4:38927] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:02,907 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:02,907 DEBUG [RS:1;jenkins-hbase4:42175] flush.RegionServerFlushTableProcedureManager(106): Start region 
server flush procedure manager jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,907 DEBUG [RS:1;jenkins-hbase4:42175] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42175,1690146662090' 2023-07-23 21:11:02,907 DEBUG [RS:1;jenkins-hbase4:42175] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:02,908 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,908 DEBUG [RS:0;jenkins-hbase4:38927] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:02,908 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,908 DEBUG [RS:1;jenkins-hbase4:42175] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:02,908 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:02,908 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:02,908 DEBUG [RS:0;jenkins-hbase4:38927] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:02,908 DEBUG [RS:0;jenkins-hbase4:38927] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38927,1690146661924' 2023-07-23 21:11:02,908 DEBUG [RS:0;jenkins-hbase4:38927] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:02,908 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:02,908 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:02,908 DEBUG [RS:1;jenkins-hbase4:42175] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:02,908 DEBUG [RS:1;jenkins-hbase4:42175] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42175,1690146662090' 2023-07-23 21:11:02,909 DEBUG [RS:1;jenkins-hbase4:42175] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:02,909 DEBUG [RS:0;jenkins-hbase4:38927] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:02,909 DEBUG [RS:1;jenkins-hbase4:42175] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:02,909 DEBUG [RS:0;jenkins-hbase4:38927] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:02,909 INFO [RS:0;jenkins-hbase4:38927] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 21:11:02,909 DEBUG [RS:1;jenkins-hbase4:42175] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:02,909 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore 
name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,909 INFO [RS:1;jenkins-hbase4:42175] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 21:11:02,910 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,910 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ZKUtil(398): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 21:11:02,910 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ZKUtil(398): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 21:11:02,910 INFO [RS:0;jenkins-hbase4:38927] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 21:11:02,910 INFO [RS:1;jenkins-hbase4:42175] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 21:11:02,910 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,910 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,910 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,910 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:03,006 WARN [ReadOnlyZKClient-127.0.0.1:59847@0x308e224c] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 21:11:03,006 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:03,008 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36240, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:03,009 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42175] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:36240 deadline: 1690146723008, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:03,011 INFO [RS:2;jenkins-hbase4:39795] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39795%2C1690146662307, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:03,012 INFO [RS:0;jenkins-hbase4:38927] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38927%2C1690146661924, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:03,012 INFO [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42175%2C1690146662090, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:03,034 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:03,035 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:03,035 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:03,035 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:03,035 DEBUG 
[RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:03,036 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:03,038 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:03,039 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:03,039 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:03,040 INFO [RS:0;jenkins-hbase4:38927] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924/jenkins-hbase4.apache.org%2C38927%2C1690146661924.1690146663014 2023-07-23 21:11:03,040 INFO [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090/jenkins-hbase4.apache.org%2C42175%2C1690146662090.1690146663014 2023-07-23 21:11:03,041 INFO [RS:2;jenkins-hbase4:39795] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307/jenkins-hbase4.apache.org%2C39795%2C1690146662307.1690146663012 2023-07-23 21:11:03,042 DEBUG [RS:0;jenkins-hbase4:38927] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK]] 2023-07-23 21:11:03,047 DEBUG [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:11:03,047 DEBUG [RS:2;jenkins-hbase4:39795] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:11:03,048 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to 
jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:03,050 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:03,053 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36244, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:03,056 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:11:03,057 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:03,058 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42175%2C1690146662090.meta, suffix=.meta, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:03,072 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:03,072 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:03,072 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:03,074 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090/jenkins-hbase4.apache.org%2C42175%2C1690146662090.meta.1690146663059.meta 2023-07-23 21:11:03,074 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK]] 2023-07-23 21:11:03,074 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:03,074 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:03,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:11:03,075 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded 
coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-23 21:11:03,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:11:03,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:03,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:11:03,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:11:03,076 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:11:03,077 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:11:03,077 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:11:03,078 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:11:03,086 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:03,086 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:03,086 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:03,086 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:11:03,087 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:03,087 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:03,087 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:11:03,094 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27a108a6498540b9881fffee97f83a46 2023-07-23 21:11:03,094 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier/27a108a6498540b9881fffee97f83a46 2023-07-23 21:11:03,095 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:03,095 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:11:03,096 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:11:03,096 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:11:03,096 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:11:03,102 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:11:03,102 DEBUG 
[StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:11:03,102 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:03,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:11:03,104 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:11:03,108 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 21:11:03,110 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:11:03,113 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=142; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9945712000, jitterRate=-0.073733389377594}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:11:03,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:11:03,115 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=114, masterSystemTime=1690146663048 2023-07-23 21:11:03,124 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42175,1690146662090, state=OPEN 2023-07-23 21:11:03,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:11:03,125 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:03,125 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:03,127 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:11:03,129 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-23 21:11:03,129 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42175,1690146662090 in 234 msec 2023-07-23 21:11:03,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=111 2023-07-23 21:11:03,130 INFO 
[PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 394 msec 2023-07-23 21:11:03,324 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:03,325 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:45637 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:03,326 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:45637 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 2023-07-23 21:11:03,431 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:45637 this server is in the failed servers list 2023-07-23 21:11:03,636 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:45637 this server is in the failed servers list 2023-07-23 21:11:03,941 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:45637 this server is in the failed servers list 2023-07-23 21:11:04,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1603ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1503ms 2023-07-23 21:11:04,452 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:45637 this server is in the failed servers list 2023-07-23 21:11:05,459 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:45637 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:05,461 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:45637 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 2023-07-23 21:11:05,836 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3105ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3005ms 2023-07-23 21:11:07,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4508ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-23 21:11:07,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-23 21:11:07,242 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,42727,1690146641774, regionLocation=jenkins-hbase4.apache.org,42727,1690146641774, openSeqNum=2 2023-07-23 21:11:07,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,45637,1690146645550, regionLocation=jenkins-hbase4.apache.org,45637,1690146645550, openSeqNum=13 2023-07-23 21:11:07,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 21:11:07,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690146727243 2023-07-23 21:11:07,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690146787243 2023-07-23 21:11:07,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-23 21:11:07,262 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,42727,1690146641774 had 2 regions 2023-07-23 21:11:07,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38577,1690146661744-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:07,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38577,1690146661744-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:07,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38577,1690146661744-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:07,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38577, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:07,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:07,264 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,46485,1690146642211 had 0 regions 2023-07-23 21:11:07,264 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. is NOT online; state={cfdae6c1dde0d9be1f26f623634660ba state=OPEN, ts=1690146667242, server=jenkins-hbase4.apache.org,42727,1690146641774}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-23 21:11:07,264 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,45637,1690146645550 had 1 regions 2023-07-23 21:11:07,266 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=110, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,46485,1690146642211, splitWal=true, meta=false, isMeta: false 2023-07-23 21:11:07,266 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=112, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,45637,1690146645550, splitWal=true, meta=false, isMeta: false 2023-07-23 21:11:07,269 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=111, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42727,1690146641774, splitWal=true, meta=true, isMeta: false 2023-07-23 21:11:07,269 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,42335,1690146647320 had 0 regions 2023-07-23 21:11:07,270 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211-splitting 2023-07-23 21:11:07,272 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211-splitting dir is empty, no logs to split. 2023-07-23 21:11:07,272 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,46485,1690146642211 WAL count=0, meta=false 2023-07-23 21:11:07,273 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550-splitting 2023-07-23 21:11:07,274 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=109, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42335,1690146647320, splitWal=true, meta=false, isMeta: false 2023-07-23 21:11:07,274 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550-splitting dir is empty, no logs to split. 2023-07-23 21:11:07,274 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,45637,1690146645550 WAL count=0, meta=false 2023-07-23 21:11:07,277 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774-splitting dir is empty, no logs to split. 
2023-07-23 21:11:07,277 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,42727,1690146641774 WAL count=0, meta=false 2023-07-23 21:11:07,277 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320-splitting 2023-07-23 21:11:07,279 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320-splitting dir is empty, no logs to split. 2023-07-23 21:11:07,279 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,42335,1690146647320 WAL count=0, meta=false 2023-07-23 21:11:07,281 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211-splitting dir is empty, no logs to split. 2023-07-23 21:11:07,281 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,46485,1690146642211 WAL count=0, meta=false 2023-07-23 21:11:07,281 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,46485,1690146642211 WAL splitting is done? wals=0, meta=false 2023-07-23 21:11:07,286 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,42727,1690146641774/hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., unknown_server=jenkins-hbase4.apache.org,45637,1690146645550/hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:07,288 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320-splitting dir is empty, no logs to split. 2023-07-23 21:11:07,288 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,42335,1690146647320 WAL count=0, meta=false 2023-07-23 21:11:07,288 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,42335,1690146647320 WAL splitting is done? wals=0, meta=false 2023-07-23 21:11:07,289 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550-splitting dir is empty, no logs to split. 2023-07-23 21:11:07,289 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,45637,1690146645550 WAL count=0, meta=false 2023-07-23 21:11:07,289 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,45637,1690146645550 WAL splitting is done? wals=0, meta=false 2023-07-23 21:11:07,289 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,46485,1690146642211 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,46485,1690146642211-splitting does not exist. 2023-07-23 21:11:07,290 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42727,1690146641774-splitting dir is empty, no logs to split. 
2023-07-23 21:11:07,290 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,42727,1690146641774 WAL count=0, meta=false 2023-07-23 21:11:07,290 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,42727,1690146641774 WAL splitting is done? wals=0, meta=false 2023-07-23 21:11:07,292 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,42335,1690146647320 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42335,1690146647320-splitting does not exist. 2023-07-23 21:11:07,293 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,45637,1690146645550 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45637,1690146645550-splitting does not exist. 2023-07-23 21:11:07,295 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,46485,1690146642211 after splitting done 2023-07-23 21:11:07,295 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,46485,1690146642211 from processing; numProcessing=3 2023-07-23 21:11:07,295 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN}] 2023-07-23 21:11:07,295 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN}] 2023-07-23 21:11:07,296 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,42335,1690146647320 after splitting done 2023-07-23 21:11:07,296 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase4.apache.org,42335,1690146647320 from processing; numProcessing=2 2023-07-23 21:11:07,297 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN 2023-07-23 21:11:07,300 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=116, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN 2023-07-23 21:11:07,301 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,46485,1690146642211, splitWal=true, meta=false in 4.6240 sec 2023-07-23 21:11:07,301 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:07,301 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=116, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:07,301 DEBUG [jenkins-hbase4:38577] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:11:07,302 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:07,302 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:07,302 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:07,302 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:07,302 DEBUG [jenkins-hbase4:38577] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-23 21:11:07,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,42335,1690146647320, splitWal=true, meta=false in 4.6270 sec 2023-07-23 21:11:07,304 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:07,304 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=116 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:07,305 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146667304"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146667304"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146667304"}]},"ts":"1690146667304"} 2023-07-23 21:11:07,305 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146667304"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146667304"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146667304"}]},"ts":"1690146667304"} 2023-07-23 21:11:07,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=117, ppid=115, state=RUNNABLE; OpenRegionProcedure cfdae6c1dde0d9be1f26f623634660ba, server=jenkins-hbase4.apache.org,42175,1690146662090}] 2023-07-23 21:11:07,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=116, state=RUNNABLE; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,38927,1690146661924}] 2023-07-23 21:11:07,462 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:07,462 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:07,464 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34642, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:07,467 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
handler.AssignRegionHandler(130): Open hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:07,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cfdae6c1dde0d9be1f26f623634660ba, NAME => 'hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:07,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,477 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:07,477 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,477 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:07,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674d6b4e3c5d6a4f0860e9c874b3e183, NAME => 'hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:07,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:07,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. service=MultiRowMutationService 2023-07-23 21:11:07,481 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 21:11:07,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:07,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,483 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,484 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:45637 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:07,484 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:45637 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 2023-07-23 21:11:07,486 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info 2023-07-23 21:11:07,487 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info 2023-07-23 21:11:07,487 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,487 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cfdae6c1dde0d9be1f26f623634660ba columnFamilyName info 2023-07-23 21:11:07,488 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:11:07,488 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:11:07,489 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674d6b4e3c5d6a4f0860e9c874b3e183 columnFamilyName m 2023-07-23 21:11:07,491 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4173 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:45637 failed on connection exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., hostname=jenkins-hbase4.apache.org,45637,1690146645550, seqNum=13, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:45637 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:45637 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:07,519 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:11:07,520 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info/39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:11:07,520 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(310): Store=cfdae6c1dde0d9be1f26f623634660ba/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:07,520 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8 2023-07-23 21:11:07,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,527 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef999fa06b66465f978c7309df40e37f 2023-07-23 21:11:07,527 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f 2023-07-23 21:11:07,527 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(310): Store=674d6b4e3c5d6a4f0860e9c874b3e183/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:07,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:07,533 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cfdae6c1dde0d9be1f26f623634660ba; next sequenceid=15; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11591364960, jitterRate=0.07952998578548431}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:07,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:11:07,535 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., pid=117, masterSystemTime=1690146667462 2023-07-23 21:11:07,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:07,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:11:07,536 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:07,537 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674d6b4e3c5d6a4f0860e9c874b3e183; next sequenceid=77; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@503217e6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:07,537 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:07,537 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPEN, openSeqNum=15, regionLocation=jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:07,537 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146667537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146667537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146667537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146667537"}]},"ts":"1690146667537"} 2023-07-23 21:11:07,537 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., pid=118, masterSystemTime=1690146667462 2023-07-23 21:11:07,541 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:07,542 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
2023-07-23 21:11:07,542 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=116 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, openSeqNum=77, regionLocation=jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:07,542 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146667542"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146667542"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146667542"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146667542"}]},"ts":"1690146667542"} 2023-07-23 21:11:07,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=117, resume processing ppid=115 2023-07-23 21:11:07,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, ppid=115, state=SUCCESS; OpenRegionProcedure cfdae6c1dde0d9be1f26f623634660ba, server=jenkins-hbase4.apache.org,42175,1690146662090 in 234 msec 2023-07-23 21:11:07,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=111 2023-07-23 21:11:07,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN in 248 msec 2023-07-23 21:11:07,545 INFO [PEWorker-4] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,42727,1690146641774 after splitting done 2023-07-23 21:11:07,545 DEBUG [PEWorker-4] master.DeadServer(114): Removed jenkins-hbase4.apache.org,42727,1690146641774 from processing; numProcessing=1 2023-07-23 21:11:07,546 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=116 2023-07-23 21:11:07,546 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=116, state=SUCCESS; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,38927,1690146661924 in 236 msec 2023-07-23 21:11:07,547 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,42727,1690146641774, splitWal=true, meta=true in 4.8720 sec 2023-07-23 21:11:07,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=112 2023-07-23 21:11:07,549 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,45637,1690146645550 after splitting done 2023-07-23 21:11:07,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=112, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN in 251 msec 2023-07-23 21:11:07,549 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,45637,1690146645550 from processing; numProcessing=0 2023-07-23 21:11:07,551 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,45637,1690146645550, splitWal=true, meta=false in 4.8740 sec 2023-07-23 21:11:08,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-23 
21:11:08,285 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:11:08,289 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 21:11:08,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.808sec 2023-07-23 21:11:08,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-23 21:11:08,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:08,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=119, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-23 21:11:08,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-23 21:11:08,293 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:11:08,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-23 21:11:08,294 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:11:08,307 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,308 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/quota/99f4bb247673f611dc82de993563e38b empty. 2023-07-23 21:11:08,309 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,309 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-23 21:11:08,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-23 21:11:08,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 
2023-07-23 21:11:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-23 21:11:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 21:11:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38577,1690146661744-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 21:11:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38577,1690146661744-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 21:11:08,319 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 21:11:08,327 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-23 21:11:08,328 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 99f4bb247673f611dc82de993563e38b, NAME => 'hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.tmp 2023-07-23 21:11:08,342 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:08,342 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 99f4bb247673f611dc82de993563e38b, disabling compactions & flushes 2023-07-23 21:11:08,342 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:08,342 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:08,342 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
after waiting 0 ms 2023-07-23 21:11:08,342 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:08,342 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:08,343 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:08,349 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:11:08,350 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146668349"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146668349"}]},"ts":"1690146668349"} 2023-07-23 21:11:08,351 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:11:08,353 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:11:08,353 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146668353"}]},"ts":"1690146668353"} 2023-07-23 21:11:08,354 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-23 21:11:08,357 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:08,357 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:08,357 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:08,357 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:08,357 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:08,360 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=120, ppid=119, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN}] 2023-07-23 21:11:08,362 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, ppid=119, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN 2023-07-23 21:11:08,362 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, ppid=119, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42175,1690146662090; forceNewPlan=false, retain=false 2023-07-23 21:11:08,372 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x17d152d6 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 
2023-07-23 21:11:08,377 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16b99113, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:08,380 DEBUG [hconnection-0x3cfcd086-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:08,381 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36254, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:08,386 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-23 21:11:08,386 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x17d152d6 to 127.0.0.1:59847 2023-07-23 21:11:08,386 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:08,389 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:38577 after: jenkins-hbase4.apache.org:38577 2023-07-23 21:11:08,389 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x55d026c4 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:08,403 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ff2a418, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:08,403 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:08,513 INFO [jenkins-hbase4:38577] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:11:08,514 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:08,514 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146668514"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146668514"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146668514"}]},"ts":"1690146668514"} 2023-07-23 21:11:08,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; OpenRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,42175,1690146662090}] 2023-07-23 21:11:08,580 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:11:08,673 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:08,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 99f4bb247673f611dc82de993563e38b, NAME => 'hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:08,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:08,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,679 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,681 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/q 2023-07-23 21:11:08,681 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/q 2023-07-23 21:11:08,681 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99f4bb247673f611dc82de993563e38b columnFamilyName q 2023-07-23 21:11:08,682 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(310): Store=99f4bb247673f611dc82de993563e38b/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:08,682 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,684 DEBUG 
[StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/u 2023-07-23 21:11:08,684 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/u 2023-07-23 21:11:08,685 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99f4bb247673f611dc82de993563e38b columnFamilyName u 2023-07-23 21:11:08,685 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(310): Store=99f4bb247673f611dc82de993563e38b/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:08,686 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,691 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-23 21:11:08,692 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:08,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:11:08,696 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 99f4bb247673f611dc82de993563e38b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10435054240, jitterRate=-0.028159841895103455}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-23 21:11:08,696 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:08,697 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b., pid=121, masterSystemTime=1690146668668 2023-07-23 21:11:08,700 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:08,700 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146668700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146668700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146668700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146668700"}]},"ts":"1690146668700"} 2023-07-23 21:11:08,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:08,701 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:08,706 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-23 21:11:08,706 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; OpenRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,42175,1690146662090 in 186 msec 2023-07-23 21:11:08,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=120, resume processing ppid=119 2023-07-23 21:11:08,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, ppid=119, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN in 349 msec 2023-07-23 21:11:08,714 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:11:08,714 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146668714"}]},"ts":"1690146668714"} 2023-07-23 21:11:08,716 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-23 21:11:08,719 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:11:08,723 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, state=SUCCESS; CreateTableProcedure table=hbase:quota in 430 msec 2023-07-23 21:11:08,821 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 21:11:08,850 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 21:11:08,851 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 21:11:08,852 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-23 21:11:10,166 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:11:10,167 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-23 21:11:10,167 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 21:11:10,167 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-23 21:11:10,167 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:10,167 INFO [HBase-Metrics2-1] 
impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 21:11:10,167 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:11:10,168 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-23 21:11:11,533 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:11,535 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:11,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 21:11:11,536 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-23 21:11:11,548 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:11,548 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:11,548 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:11,550 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-23 21:11:11,550 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38577,1690146661744] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 21:11:11,607 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 21:11:11,609 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 21:11:11,611 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:11:11,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38577] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set 
balanceSwitch=false 2023-07-23 21:11:11,613 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x09a1fd58 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:11,619 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68be1ce5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:11,619 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:11,628 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:11,630 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:11,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019405901c001b connected 2023-07-23 21:11:11,631 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:11,633 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40580, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:11,641 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-23 21:11:11,641 INFO [Listener at localhost/38995] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x55d026c4 to 127.0.0.1:59847 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] util.JVMClusterUtil(257): Found active master hash=562424787, stopped=false 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:11:11,641 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 21:11:11,641 INFO [Listener at localhost/38995] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:11,643 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:11,643 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/running 2023-07-23 21:11:11,643 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:11,643 INFO [Listener at localhost/38995] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 21:11:11,643 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:11,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:11,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:11,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:11,644 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x308e224c to 127.0.0.1:59847 2023-07-23 21:11:11,643 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:11,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:11,646 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,646 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38927,1690146661924' ***** 2023-07-23 21:11:11,646 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:11,646 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:11,647 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42175,1690146662090' ***** 2023-07-23 21:11:11,649 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:11,649 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:11,650 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39795,1690146662307' ***** 2023-07-23 21:11:11,652 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:11,652 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:11,662 INFO [RS:0;jenkins-hbase4:38927] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@798231e4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:11,662 INFO [RS:2;jenkins-hbase4:39795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3153ec14{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:11,662 INFO [RS:1;jenkins-hbase4:42175] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6b2e4cf8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:11,662 INFO [RS:0;jenkins-hbase4:38927] server.AbstractConnector(383): Stopped ServerConnector@435a4063{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:11,662 INFO [RS:1;jenkins-hbase4:42175] server.AbstractConnector(383): Stopped ServerConnector@37fc8d1b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:11,662 INFO [RS:0;jenkins-hbase4:38927] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:11,662 INFO [RS:1;jenkins-hbase4:42175] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:11,663 INFO [RS:0;jenkins-hbase4:38927] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@37264fd4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:11,663 INFO [RS:1;jenkins-hbase4:42175] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@62436ad0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:11,663 INFO [RS:0;jenkins-hbase4:38927] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46d1555c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:11,663 INFO [RS:1;jenkins-hbase4:42175] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5459ff94{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:11,663 INFO [RS:2;jenkins-hbase4:39795] server.AbstractConnector(383): Stopped ServerConnector@44b83e33{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:11,664 INFO [RS:2;jenkins-hbase4:39795] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:11,664 INFO [RS:0;jenkins-hbase4:38927] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:11,664 INFO [RS:0;jenkins-hbase4:38927] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:11,664 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:11,664 INFO [RS:0;jenkins-hbase4:38927] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 21:11:11,665 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(3305): Received CLOSE for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:11,664 INFO [RS:2;jenkins-hbase4:39795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4134ac44{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:11,665 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:11,665 INFO [RS:2;jenkins-hbase4:39795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@58f5ae2a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:11,666 INFO [RS:1;jenkins-hbase4:42175] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:11,666 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:11,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674d6b4e3c5d6a4f0860e9c874b3e183, disabling compactions & flushes 2023-07-23 21:11:11,665 DEBUG [RS:0;jenkins-hbase4:38927] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a1ef121 to 127.0.0.1:59847 2023-07-23 21:11:11,668 DEBUG [RS:0;jenkins-hbase4:38927] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,668 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 21:11:11,668 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1478): Online Regions={674d6b4e3c5d6a4f0860e9c874b3e183=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.} 2023-07-23 21:11:11,667 INFO [RS:2;jenkins-hbase4:39795] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:11,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:11,668 INFO [RS:2;jenkins-hbase4:39795] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:11,668 INFO [RS:2;jenkins-hbase4:39795] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:11,669 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:11,669 DEBUG [RS:2;jenkins-hbase4:39795] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x168f9b2b to 127.0.0.1:59847 2023-07-23 21:11:11,669 DEBUG [RS:2;jenkins-hbase4:39795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,669 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39795,1690146662307; all regions closed. 2023-07-23 21:11:11,669 DEBUG [RS:2;jenkins-hbase4:39795] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 21:11:11,667 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:11,666 INFO [RS:1;jenkins-hbase4:42175] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-23 21:11:11,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:11,669 DEBUG [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1504): Waiting on 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:11,669 INFO [RS:1;jenkins-hbase4:42175] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:11,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. after waiting 0 ms 2023-07-23 21:11:11,669 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(3305): Received CLOSE for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:11,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:11,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 674d6b4e3c5d6a4f0860e9c874b3e183 1/1 column families, dataSize=242 B heapSize=648 B 2023-07-23 21:11:11,670 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(3305): Received CLOSE for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:11,670 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:11,670 DEBUG [RS:1;jenkins-hbase4:42175] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4d630367 to 127.0.0.1:59847 2023-07-23 21:11:11,670 DEBUG [RS:1;jenkins-hbase4:42175] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,670 INFO [RS:1;jenkins-hbase4:42175] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:11,670 INFO [RS:1;jenkins-hbase4:42175] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:11,670 INFO [RS:1;jenkins-hbase4:42175] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 21:11:11,670 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 21:11:11,675 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-23 21:11:11,675 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1478): Online Regions={cfdae6c1dde0d9be1f26f623634660ba=hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., 1588230740=hbase:meta,,1.1588230740, 99f4bb247673f611dc82de993563e38b=hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.} 2023-07-23 21:11:11,676 DEBUG [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1504): Waiting on 1588230740, 99f4bb247673f611dc82de993563e38b, cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:11,677 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:11,677 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:11,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cfdae6c1dde0d9be1f26f623634660ba, disabling compactions & flushes 2023-07-23 21:11:11,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:11,683 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:11:11,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:11,683 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:11:11,683 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:11:11,683 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:11:11,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:11:11,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.05 KB heapSize=5.87 KB 2023-07-23 21:11:11,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. after waiting 0 ms 2023-07-23 21:11:11,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:11:11,692 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:11,693 DEBUG [RS:2;jenkins-hbase4:39795] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:11,694 INFO [RS:2;jenkins-hbase4:39795] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39795%2C1690146662307:(num 1690146663012) 2023-07-23 21:11:11,694 DEBUG [RS:2;jenkins-hbase4:39795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,694 INFO [RS:2;jenkins-hbase4:39795] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:11,700 INFO [RS:2;jenkins-hbase4:39795] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:11,708 INFO [RS:2;jenkins-hbase4:39795] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:11,708 INFO [RS:2;jenkins-hbase4:39795] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:11,708 INFO [RS:2;jenkins-hbase4:39795] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:11,708 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:11,719 INFO [RS:2;jenkins-hbase4:39795] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39795 2023-07-23 21:11:11,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=14 2023-07-23 21:11:11,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:11,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:11:11,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:11:11,729 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:11,729 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:11,729 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:11,729 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:11,730 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39795,1690146662307 2023-07-23 21:11:11,730 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:11,730 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:11,730 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39795,1690146662307] 2023-07-23 21:11:11,731 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39795,1690146662307; numProcessing=1 2023-07-23 21:11:11,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 99f4bb247673f611dc82de993563e38b, disabling compactions & flushes 2023-07-23 21:11:11,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:11,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:11,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. after waiting 0 ms 2023-07-23 21:11:11,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:11,733 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39795,1690146662307 already deleted, retry=false 2023-07-23 21:11:11,733 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39795,1690146662307 expired; onlineServers=2 2023-07-23 21:11:11,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=242 B at sequenceid=80 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/314ee21ed86d420b8896380bfa6f8703 2023-07-23 21:11:11,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:11:11,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:11,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:11,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:11,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=153 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/0319ae9753684fde96cc22ac6aee2994 2023-07-23 21:11:11,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/314ee21ed86d420b8896380bfa6f8703 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703 2023-07-23 21:11:11,771 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703, entries=2, sequenceid=80, filesize=5.0 K 2023-07-23 21:11:11,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~242 B/242, heapSize ~632 B/632, currentSize=0 B/0 for 674d6b4e3c5d6a4f0860e9c874b3e183 in 103ms, sequenceid=80, compaction requested=true 2023-07-23 21:11:11,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/recovered.edits/83.seqid, newMaxSeqId=83, maxSeqId=76 2023-07-23 21:11:11,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:11,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:11,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:11,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:11,795 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=153 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/table/a91c64934a824ba3a000ed314e0f4688 2023-07-23 21:11:11,801 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/0319ae9753684fde96cc22ac6aee2994 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994 2023-07-23 21:11:11,806 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994, entries=26, sequenceid=153, filesize=7.7 K 2023-07-23 21:11:11,807 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/table/a91c64934a824ba3a000ed314e0f4688 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/a91c64934a824ba3a000ed314e0f4688 2023-07-23 21:11:11,812 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/a91c64934a824ba3a000ed314e0f4688, entries=2, sequenceid=153, filesize=4.7 K 2023-07-23 21:11:11,813 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.05 KB/3126, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 129ms, sequenceid=153, compaction requested=false 2023-07-23 21:11:11,824 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/recovered.edits/156.seqid, newMaxSeqId=156, maxSeqId=141 2023-07-23 21:11:11,824 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:11,825 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:11,825 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:11:11,825 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:11,843 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, 
quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:11,843 INFO [RS:2;jenkins-hbase4:39795] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39795,1690146662307; zookeeper connection closed. 2023-07-23 21:11:11,843 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:39795-0x1019405901c0013, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:11,844 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@17b60da7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@17b60da7 2023-07-23 21:11:11,869 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38927,1690146661924; all regions closed. 2023-07-23 21:11:11,869 DEBUG [RS:0;jenkins-hbase4:38927] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 21:11:11,872 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 21:11:11,873 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 21:11:11,876 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42175,1690146662090; all regions closed. 2023-07-23 21:11:11,876 DEBUG [RS:1;jenkins-hbase4:42175] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 21:11:11,878 DEBUG [RS:0;jenkins-hbase4:38927] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:11,879 INFO [RS:0;jenkins-hbase4:38927] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38927%2C1690146661924:(num 1690146663014) 2023-07-23 21:11:11,879 DEBUG [RS:0;jenkins-hbase4:38927] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,879 INFO [RS:0;jenkins-hbase4:38927] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:11,879 INFO [RS:0;jenkins-hbase4:38927] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:11,879 INFO [RS:0;jenkins-hbase4:38927] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:11,879 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:11,879 INFO [RS:0;jenkins-hbase4:38927] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:11,880 INFO [RS:0;jenkins-hbase4:38927] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 21:11:11,880 INFO [RS:0;jenkins-hbase4:38927] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38927 2023-07-23 21:11:11,885 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:11,885 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:11,885 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38927,1690146661924 2023-07-23 21:11:11,885 DEBUG [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:11,885 INFO [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42175%2C1690146662090.meta:.meta(num 1690146663059) 2023-07-23 21:11:11,885 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38927,1690146661924] 2023-07-23 21:11:11,885 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38927,1690146661924; numProcessing=2 2023-07-23 21:11:11,888 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38927,1690146661924 already deleted, retry=false 2023-07-23 21:11:11,888 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38927,1690146661924 expired; onlineServers=1 2023-07-23 21:11:11,892 DEBUG [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:11,892 INFO [RS:1;jenkins-hbase4:42175] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42175%2C1690146662090:(num 1690146663014) 2023-07-23 21:11:11,892 DEBUG [RS:1;jenkins-hbase4:42175] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,892 INFO [RS:1;jenkins-hbase4:42175] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:11,892 INFO [RS:1;jenkins-hbase4:42175] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:11,892 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 21:11:11,893 INFO [RS:1;jenkins-hbase4:42175] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42175 2023-07-23 21:11:11,897 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42175,1690146662090 2023-07-23 21:11:11,897 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:11,898 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42175,1690146662090] 2023-07-23 21:11:11,898 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42175,1690146662090; numProcessing=3 2023-07-23 21:11:11,899 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42175,1690146662090 already deleted, retry=false 2023-07-23 21:11:11,899 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42175,1690146662090 expired; onlineServers=0 2023-07-23 21:11:11,900 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38577,1690146661744' ***** 2023-07-23 21:11:11,900 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 21:11:11,900 DEBUG [M:0;jenkins-hbase4:38577] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25ba6b3b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:11,900 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:11,902 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:11,902 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:11,902 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:11,902 INFO [M:0;jenkins-hbase4:38577] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2505c556{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:11:11,903 INFO [M:0;jenkins-hbase4:38577] server.AbstractConnector(383): Stopped ServerConnector@7071e77c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:11,903 INFO [M:0;jenkins-hbase4:38577] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:11,903 INFO [M:0;jenkins-hbase4:38577] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@5e3418f0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:11,903 INFO [M:0;jenkins-hbase4:38577] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67c6a36d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:11,903 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38577,1690146661744 2023-07-23 21:11:11,904 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38577,1690146661744; all regions closed. 2023-07-23 21:11:11,904 DEBUG [M:0;jenkins-hbase4:38577] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:11,904 INFO [M:0;jenkins-hbase4:38577] master.HMaster(1491): Stopping master jetty server 2023-07-23 21:11:11,904 INFO [M:0;jenkins-hbase4:38577] server.AbstractConnector(383): Stopped ServerConnector@264618d7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:11,905 DEBUG [M:0;jenkins-hbase4:38577] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 21:11:11,905 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 21:11:11,905 DEBUG [M:0;jenkins-hbase4:38577] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 21:11:11,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146662730] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146662730,5,FailOnTimeoutGroup] 2023-07-23 21:11:11,906 INFO [M:0;jenkins-hbase4:38577] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 21:11:11,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146662726] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146662726,5,FailOnTimeoutGroup] 2023-07-23 21:11:11,906 INFO [M:0;jenkins-hbase4:38577] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-23 21:11:11,907 INFO [M:0;jenkins-hbase4:38577] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:11,907 DEBUG [M:0;jenkins-hbase4:38577] master.HMaster(1512): Stopping service threads 2023-07-23 21:11:11,907 INFO [M:0;jenkins-hbase4:38577] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 21:11:11,907 ERROR [M:0;jenkins-hbase4:38577] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 21:11:11,907 INFO [M:0;jenkins-hbase4:38577] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 21:11:11,907 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-23 21:11:11,908 DEBUG [M:0;jenkins-hbase4:38577] zookeeper.ZKUtil(398): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 21:11:11,908 WARN [M:0;jenkins-hbase4:38577] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 21:11:11,908 INFO [M:0;jenkins-hbase4:38577] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 21:11:11,909 INFO [M:0;jenkins-hbase4:38577] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 21:11:11,909 DEBUG [M:0;jenkins-hbase4:38577] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:11:11,909 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:11,909 DEBUG [M:0;jenkins-hbase4:38577] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:11,909 DEBUG [M:0;jenkins-hbase4:38577] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:11:11,910 DEBUG [M:0;jenkins-hbase4:38577] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:11,910 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.28 KB heapSize=54.86 KB 2023-07-23 21:11:11,926 INFO [M:0;jenkins-hbase4:38577] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.28 KB at sequenceid=910 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e0fa6eac44f74741ac599e744ee3f368 2023-07-23 21:11:11,932 DEBUG [M:0;jenkins-hbase4:38577] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e0fa6eac44f74741ac599e744ee3f368 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e0fa6eac44f74741ac599e744ee3f368 2023-07-23 21:11:11,937 INFO [M:0;jenkins-hbase4:38577] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e0fa6eac44f74741ac599e744ee3f368, entries=13, sequenceid=910, filesize=7.2 K 2023-07-23 21:11:11,938 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegion(2948): Finished flush of dataSize ~45.28 KB/46367, heapSize ~54.84 KB/56160, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=910, compaction requested=false 2023-07-23 21:11:11,941 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:11:11,941 DEBUG [M:0;jenkins-hbase4:38577] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:11,947 INFO [M:0;jenkins-hbase4:38577] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 21:11:11,947 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:11,948 INFO [M:0;jenkins-hbase4:38577] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38577 2023-07-23 21:11:11,950 DEBUG [M:0;jenkins-hbase4:38577] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38577,1690146661744 already deleted, retry=false 2023-07-23 21:11:12,344 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:12,344 INFO [M:0;jenkins-hbase4:38577] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38577,1690146661744; zookeeper connection closed. 2023-07-23 21:11:12,344 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:38577-0x1019405901c0010, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:12,445 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:12,445 INFO [RS:1;jenkins-hbase4:42175] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42175,1690146662090; zookeeper connection closed. 2023-07-23 21:11:12,445 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:42175-0x1019405901c0012, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:12,445 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@313b3bf4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@313b3bf4 2023-07-23 21:11:12,545 INFO [RS:0;jenkins-hbase4:38927] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38927,1690146661924; zookeeper connection closed. 
2023-07-23 21:11:12,545 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:12,545 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38927-0x1019405901c0011, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:12,545 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@60593ba9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@60593ba9 2023-07-23 21:11:12,545 INFO [Listener at localhost/38995] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-23 21:11:12,545 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-23 21:11:14,176 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:11:14,547 INFO [Listener at localhost/38995] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:14,548 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,548 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,548 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:14,548 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,548 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:14,549 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:14,553 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40555 2023-07-23 21:11:14,553 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:14,555 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:14,556 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40555 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:14,567 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:405550x0, 
quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:14,571 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40555-0x1019405901c001c connected 2023-07-23 21:11:14,581 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:14,581 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:14,582 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:14,596 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40555 2023-07-23 21:11:14,596 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40555 2023-07-23 21:11:14,596 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40555 2023-07-23 21:11:14,610 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40555 2023-07-23 21:11:14,612 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40555 2023-07-23 21:11:14,615 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:14,615 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:14,615 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:14,615 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 21:11:14,616 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:14,616 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:14,616 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
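The entries above record JVMClusterUtil reporting "Shutdown of 1 master(s) and 3 regionserver(s) complete", TestRSGroupsBasics sleeping briefly, and a replacement master (master:40555) re-binding its RPC executors, ZooKeeper session and web UI against the same ensemble at 127.0.0.1:59847; the entries that follow do the same for replacement regionservers. A minimal test-side sketch of that stop/sleep/restart sequence, assuming the public HBaseTestingUtility API of this branch (startMiniCluster, shutdownMiniHBaseCluster, restartHBaseCluster); the call sequence, exact signatures and concrete values below are illustrative and not lifted from TestRSGroupsBasics itself:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.apache.hadoop.hbase.util.Threads;

    public class MiniClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // 1 master + 3 regionservers over a 3-datanode mini DFS and a single
        // ZooKeeper server -- the cluster shape this log was produced with.
        util.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .build());

        // Stop only the HBase processes; the mini DFS and ZooKeeper stay up,
        // which is why the restarted master reuses the same quorum
        // (127.0.0.1:59847) and the same hbase.rootdir on hdfs://localhost:32841.
        util.shutdownMiniHBaseCluster();

        // "Sleeping a bit" between shutdown and restart, as the log notes;
        // the 2-second value here is illustrative.
        Threads.sleep(2000);

        // Bring HBase back up against the existing DFS/ZooKeeper state with
        // the same number of regionservers.
        util.restartHBaseCluster(3);

        // Tear everything down (HBase, DFS and ZooKeeper) at the end.
        util.shutdownMiniCluster();
      }
    }

Because only the HBase side is restarted in such a sequence, the new master comes up against the existing hbase.rootdir and ZooKeeper ensemble, as the entries that follow show.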
2023-07-23 21:11:14,616 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 36633 2023-07-23 21:11:14,616 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:14,621 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:14,621 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@177329fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:14,621 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:14,622 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78cc12d1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:14,781 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:14,782 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:14,782 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:14,782 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:11:14,783 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:14,784 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@29b4e5f5{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-36633-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3189403815072061303/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:11:14,785 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@731c75e{HTTP/1.1, (http/1.1)}{0.0.0.0:36633} 2023-07-23 21:11:14,785 INFO [Listener at localhost/38995] server.Server(415): Started @40585ms 2023-07-23 21:11:14,785 INFO [Listener at localhost/38995] master.HMaster(444): hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914, hbase.cluster.distributed=false 2023-07-23 21:11:14,786 DEBUG [pool-517-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:14,799 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:14,800 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36881 2023-07-23 21:11:14,800 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:14,802 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:14,803 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:14,804 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:14,805 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36881 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:14,809 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:368810x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:14,810 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:368810x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:14,811 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36881-0x1019405901c001d connected 2023-07-23 21:11:14,811 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:14,811 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:14,812 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36881 2023-07-23 21:11:14,812 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36881 2023-07-23 21:11:14,812 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, 
numCallQueues=1, port=36881 2023-07-23 21:11:14,814 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36881 2023-07-23 21:11:14,814 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36881 2023-07-23 21:11:14,816 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:14,816 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:14,816 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:14,817 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:14,817 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:14,817 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:14,817 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:14,818 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 39325 2023-07-23 21:11:14,818 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:14,823 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:14,823 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@54ad8272{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:14,823 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:14,824 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@333fd2bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:14,956 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:14,956 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:14,957 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:14,957 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 
21:11:14,958 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:14,958 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@dd9df98{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-39325-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4725240389936095293/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:14,961 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@318fe04f{HTTP/1.1, (http/1.1)}{0.0.0.0:39325} 2023-07-23 21:11:14,961 INFO [Listener at localhost/38995] server.Server(415): Started @40761ms 2023-07-23 21:11:14,973 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:14,973 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,973 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,974 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:14,974 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:14,974 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:14,974 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:14,975 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40573 2023-07-23 21:11:14,975 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:14,977 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:14,978 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:14,979 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:14,981 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40573 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:14,985 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): 
regionserver:405730x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:14,986 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:405730x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:14,987 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40573-0x1019405901c001e connected 2023-07-23 21:11:14,987 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:14,988 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:14,988 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40573 2023-07-23 21:11:14,988 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40573 2023-07-23 21:11:14,990 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40573 2023-07-23 21:11:14,994 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40573 2023-07-23 21:11:14,994 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40573 2023-07-23 21:11:14,996 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:14,997 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:14,997 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:14,997 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:14,998 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:14,998 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:14,998 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 21:11:14,998 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 39505 2023-07-23 21:11:14,999 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:15,003 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:15,003 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ff19d03{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:15,003 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:15,003 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e02a8be{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:15,126 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:15,127 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:15,128 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:15,128 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:11:15,129 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:15,130 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1990cee4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-39505-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4912634636783826501/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:15,132 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@727c0667{HTTP/1.1, (http/1.1)}{0.0.0.0:39505} 2023-07-23 21:11:15,132 INFO [Listener at localhost/38995] server.Server(415): Started @40932ms 2023-07-23 21:11:15,147 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:15,147 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:15,147 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:15,148 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:15,148 INFO 
[Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:15,148 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:15,148 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:15,149 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45513 2023-07-23 21:11:15,149 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:15,150 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:15,151 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:15,152 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:15,153 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45513 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:15,156 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:455130x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:15,158 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:455130x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:15,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45513-0x1019405901c001f connected 2023-07-23 21:11:15,159 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:15,159 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:15,160 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45513 2023-07-23 21:11:15,160 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45513 2023-07-23 21:11:15,160 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45513 2023-07-23 21:11:15,160 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45513 2023-07-23 21:11:15,161 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=45513 2023-07-23 21:11:15,163 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:15,163 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:15,163 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:15,163 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:15,164 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:15,164 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:15,164 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:15,164 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 46715 2023-07-23 21:11:15,164 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:15,166 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:15,167 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@9d64b09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:15,167 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:15,167 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@831f13d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:15,298 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:15,298 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:15,298 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:15,299 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:15,299 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:15,300 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7bb9c15d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-46715-hbase-server-2_4_18-SNAPSHOT_jar-_-any-945386005125645617/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:15,302 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@7c81bac{HTTP/1.1, (http/1.1)}{0.0.0.0:46715} 2023-07-23 21:11:15,302 INFO [Listener at localhost/38995] server.Server(415): Started @41102ms 2023-07-23 21:11:15,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:15,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@34f59e01{HTTP/1.1, (http/1.1)}{0.0.0.0:38313} 2023-07-23 21:11:15,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41113ms 2023-07-23 21:11:15,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,314 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:11:15,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,316 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:15,316 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:15,316 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:15,316 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:15,317 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:15,318 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:15,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40555,1690146674547 from backup master directory 2023-07-23 21:11:15,320 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:15,322 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,322 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:11:15,322 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:15,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:15,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x452f343f to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:15,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7930c45b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:15,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:15,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 21:11:15,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:15,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744-dead as it is dead 2023-07-23 21:11:15,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744-dead/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 2023-07-23 21:11:15,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744-dead/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 after 1ms 2023-07-23 21:11:15,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744-dead/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 2023-07-23 21:11:15,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,38577,1690146661744-dead 2023-07-23 21:11:15,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,401 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40555%2C1690146674547, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,40555,1690146674547, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/oldWALs, maxLogs=10 2023-07-23 21:11:15,417 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:15,420 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:15,420 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:15,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/WALs/jenkins-hbase4.apache.org,40555,1690146674547/jenkins-hbase4.apache.org%2C40555%2C1690146674547.1690146675401 2023-07-23 21:11:15,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:11:15,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:15,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:15,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:15,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:15,429 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:15,431 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 21:11:15,431 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 21:11:15,438 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6e0eb0a7955c4e2aa2b5d2d2df7904ef 2023-07-23 21:11:15,444 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e0fa6eac44f74741ac599e744ee3f368 2023-07-23 21:11:15,444 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:15,444 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): 
Found 1 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-23 21:11:15,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 2023-07-23 21:11:15,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 128, firstSequenceIdInLog=800, maxSequenceIdInLog=912, path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 2023-07-23 21:11:15,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38577%2C1690146661744.1690146662553 2023-07-23 21:11:15,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:15,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/912.seqid, newMaxSeqId=912, maxSeqId=798 2023-07-23 21:11:15,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=913; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11788511040, jitterRate=0.09789064526557922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:15,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:15,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 21:11:15,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 21:11:15,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 21:11:15,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
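[Editor's note] The entries above show the new active master recovering the HDFS lease on the dead master's WAL (util.RecoverLeaseFSUtils), moving it under recovered.wals, replaying its edits, and only then opening the local store region. The snippet below sketches just the lease-recovery step with Hadoop's public DistributedFileSystem.recoverLease(Path) in a bounded retry loop; the path and retry bounds are placeholders, and the production logic lives in HBase's RecoverLeaseFSUtils rather than this sketch.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: recover the HDFS lease on a WAL left open by a dead process before
// reading it. recoverLease() is asynchronous on the NameNode side, so poll.
public class RecoverWalLeaseSketch {
    public static void recoverLease(Configuration conf, Path wal)
            throws IOException, InterruptedException {
        FileSystem fs = wal.getFileSystem(conf);
        if (!(fs instanceof DistributedFileSystem)) {
            return; // nothing to recover on a local filesystem
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        for (int attempt = 0; attempt < 10; attempt++) {
            // Returns true once the NameNode has closed the file and the lease is free.
            if (dfs.recoverLease(wal)) {
                return;
            }
            Thread.sleep(1000L);
        }
        throw new IOException("Could not recover lease on " + wal);
    }
}
```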
2023-07-23 21:11:15,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 21:11:15,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-23 21:11:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-23 21:11:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-23 21:11:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-23 21:11:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:11:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE 2023-07-23 21:11:15,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36963,1690146641995, splitWal=true, meta=false 2023-07-23 21:11:15,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-23 21:11:15,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:11:15,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:11:15,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-23 21:11:15,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:11:15,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:11:15,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-23 21:11:15,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:15,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=68, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:11:15,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:11:15,474 
DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-23 21:11:15,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=75, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:15,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=76, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:11:15,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:11:15,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-23 21:11:15,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:11:15,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-23 21:11:15,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=87, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-23 21:11:15,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690146655169 type: FLUSH version: 2 ttl: 0 ) 2023-07-23 21:11:15,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:11:15,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-23 21:11:15,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:11:15,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-23 21:11:15,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=99, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-23 21:11:15,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:15,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; 
CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:15,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:15,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-23 21:11:15,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=108, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-23 21:11:15,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=109, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,42335,1690146647320, splitWal=true, meta=false 2023-07-23 21:11:15,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,46485,1690146642211, splitWal=true, meta=false 2023-07-23 21:11:15,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=111, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,42727,1690146641774, splitWal=true, meta=true 2023-07-23 21:11:15,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=112, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,45637,1690146645550, splitWal=true, meta=false 2023-07-23 21:11:15,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=119, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-23 21:11:15,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 19 msec 2023-07-23 21:11:15,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 21:11:15,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-23 21:11:15,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,42175,1690146662090, table=hbase:meta, region=1588230740 2023-07-23 21:11:15,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 
2023-07-23 21:11:15,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38927,1690146661924 already deleted, retry=false 2023-07-23 21:11:15,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,38927,1690146661924 on jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=122, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,38927,1690146661924, splitWal=true, meta=false 2023-07-23 21:11:15,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=122 for jenkins-hbase4.apache.org,38927,1690146661924 (carryingMeta=false) jenkins-hbase4.apache.org,38927,1690146661924/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4632d328[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-23 21:11:15,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42175,1690146662090 already deleted, retry=false 2023-07-23 21:11:15,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,42175,1690146662090 on jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,491 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,42175,1690146662090, splitWal=true, meta=true 2023-07-23 21:11:15,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=123 for jenkins-hbase4.apache.org,42175,1690146662090 (carryingMeta=true) jenkins-hbase4.apache.org,42175,1690146662090/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@46a21d5c[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-23 21:11:15,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39795,1690146662307 already deleted, retry=false 2023-07-23 21:11:15,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,39795,1690146662307 on jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=124, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,39795,1690146662307, splitWal=true, meta=false 2023-07-23 21:11:15,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=124 for jenkins-hbase4.apache.org,39795,1690146662307 (carryingMeta=false) jenkins-hbase4.apache.org,39795,1690146662307/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4ab0b490[Write locks = 1, Read locks = 0], oldState=ONLINE. 
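[Editor's note] The repeated "Set watcher on znode that does not yet exist, /hbase/master" entries earlier, followed by the NodeCreated events on /hbase/master once the active master registers, are the standard ZooKeeper pattern of watching a path before it exists so its creation is observed. A minimal sketch with the plain ZooKeeper client is below; the quorum string is a placeholder.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Sketch: exists() registers a watch even when the znode is absent, so this
// client gets a NodeCreated event when another process later creates it.
public class WatchAbsentZnodeSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, event -> { /* session events */ });

        Watcher masterWatcher = (WatchedEvent event) -> {
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                System.out.println("znode created: " + event.getPath());
                created.countDown();
            }
        };
        // Returns null because /hbase/master does not exist yet, but the watch is still set.
        zk.exists("/hbase/master", masterWatcher);

        created.await();
        zk.close();
    }
}
```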
2023-07-23 21:11:15,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-23 21:11:15,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 21:11:15,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 21:11:15,497 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 21:11:15,497 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 21:11:15,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 21:11:15,500 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:15,500 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:15,500 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:15,500 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:15,500 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:15,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40555,1690146674547, sessionid=0x1019405901c001c, setting cluster-up flag (Was=false) 2023-07-23 21:11:15,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 21:11:15,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,513 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 21:11:15,514 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:15,515 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/.hbase-snapshot/.tmp 2023-07-23 21:11:15,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 21:11:15,524 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 21:11:15,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-23 21:11:15,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 21:11:15,526 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:15,527 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 21:11:15,531 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:15,531 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:42175 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:42175 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:15,533 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:42175 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:42175 2023-07-23 21:11:15,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:11:15,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:11:15,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:11:15,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
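[Editor's note] The StochasticLoadBalancer entries above list the cost functions that get combined into a single score for a candidate cluster layout ("sum of multiplier of cost functions"). Conceptually, each function reports a normalized cost and the balancer keeps candidate moves that lower the multiplier-weighted sum; the sketch below illustrates only that scoring idea in plain Java and is not the HBase implementation (weights and costs are placeholders).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch of scoring a candidate layout as a weighted sum of
// normalized cost functions; a lower total is a better layout.
public class WeightedCostSketch {
    interface CostFunction {
        double cost(); // normalized to [0, 1] for the candidate layout
    }

    static double totalCost(Map<CostFunction, Double> functionsWithMultipliers) {
        double total = 0.0;
        for (Map.Entry<CostFunction, Double> e : functionsWithMultipliers.entrySet()) {
            total += e.getValue() * e.getKey().cost();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<CostFunction, Double> fns = new LinkedHashMap<>();
        fns.put(() -> 0.12, 500.0); // e.g. region-count skew, heavily weighted
        fns.put(() -> 0.40, 7.0);   // e.g. cost of the moves themselves
        fns.put(() -> 0.05, 5.0);   // e.g. data locality
        System.out.println("candidate layout cost = " + totalCost(fns));
    }
}
```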
2023-07-23 21:11:15,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:15,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:15,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:15,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:15,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 21:11:15,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:15,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690146705550 2023-07-23 21:11:15,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 21:11:15,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 21:11:15,550 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 21:11:15,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 21:11:15,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 21:11:15,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 21:11:15,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
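[Editor's note] The executor.ExecutorService entries above start one named thread pool per master operation type, each with fixed core/max sizes (e.g. MASTER_OPEN_REGION, corePoolSize=5, maxPoolSize=5). The sketch below shows the equivalent construction with plain java.util.concurrent and a named thread factory; the pool name and size simply echo the logged values and this is not HBase's ExecutorService wrapper.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a named, fixed-size pool (corePoolSize == maxPoolSize == 5) like the
// MASTER_OPEN_REGION pool in the log. Idle core threads are allowed to time out.
public class NamedPoolSketch {
    static ThreadPoolExecutor newPool(String name, int size) {
        AtomicInteger counter = new AtomicInteger();
        ThreadFactory factory = r -> {
            Thread t = new Thread(r, name + "-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            size, size, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), factory);
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor openRegionPool = newPool("MASTER_OPEN_REGION", 5);
        openRegionPool.submit(() ->
            System.out.println(Thread.currentThread().getName() + " opening region"));
        openRegionPool.shutdown();
    }
}
```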
2023-07-23 21:11:15,551 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42175,1690146662090; numProcessing=1 2023-07-23 21:11:15,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 21:11:15,552 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=123, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42175,1690146662090, splitWal=true, meta=true 2023-07-23 21:11:15,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 21:11:15,552 DEBUG [PEWorker-4] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39795,1690146662307; numProcessing=2 2023-07-23 21:11:15,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 21:11:15,552 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38927,1690146661924; numProcessing=3 2023-07-23 21:11:15,552 INFO [PEWorker-4] procedure.ServerCrashProcedure(161): Start pid=124, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,39795,1690146662307, splitWal=true, meta=false 2023-07-23 21:11:15,552 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=122, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,38927,1690146661924, splitWal=true, meta=false 2023-07-23 21:11:15,553 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=123, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42175,1690146662090, splitWal=true, meta=true, isMeta: true 2023-07-23 21:11:15,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 21:11:15,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 21:11:15,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146675555,5,FailOnTimeoutGroup] 2023-07-23 21:11:15,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146675555,5,FailOnTimeoutGroup] 2023-07-23 21:11:15,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 21:11:15,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
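[Editor's note] The ChoreService entries above and below enable periodic background chores (LogsCleaner and HFileCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms). A minimal plain-Java analogue of one such chore is sketched below; the task body is a placeholder and only the period mirrors the log.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: a periodic "chore" on a fixed interval, like the LogsCleaner
// (period=600000 ms) enabled in the log. The body is illustrative only.
public class PeriodicChoreSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService choreService = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "LogsCleaner-sketch");
            t.setDaemon(true);
            return t;
        });
        // Run immediately, then every 600000 ms (10 minutes), matching the logged period.
        choreService.scheduleAtFixedRate(
            () -> System.out.println("scanning oldWALs for expired files..."),
            0L, 600_000L, TimeUnit.MILLISECONDS);

        Thread.sleep(1_000L); // keep the demo alive long enough for one run
        choreService.shutdown();
    }
}
```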
2023-07-23 21:11:15,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690146675556, completionTime=-1 2023-07-23 21:11:15,556 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-23 21:11:15,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-23 21:11:15,556 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090-splitting 2023-07-23 21:11:15,557 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090-splitting dir is empty, no logs to split. 2023-07-23 21:11:15,557 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,42175,1690146662090 WAL count=0, meta=true 2023-07-23 21:11:15,559 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090-splitting dir is empty, no logs to split. 2023-07-23 21:11:15,559 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,42175,1690146662090 WAL count=0, meta=true 2023-07-23 21:11:15,559 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,42175,1690146662090 WAL splitting is done? 
wals=0, meta=true 2023-07-23 21:11:15,560 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 21:11:15,561 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 21:11:15,562 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:15,605 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:15,605 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:15,605 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:15,605 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:15,605 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:15,605 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:15,608 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:15,608 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:15,608 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:15,608 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:15,608 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:15,608 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:15,611 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:15,612 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:15,612 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:15,614 DEBUG [RS:2;jenkins-hbase4:45513] zookeeper.ReadOnlyZKClient(139): Connect 0x7de674e9 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:15,614 DEBUG [RS:1;jenkins-hbase4:40573] zookeeper.ReadOnlyZKClient(139): Connect 0x38b5f8c1 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:15,614 DEBUG [RS:0;jenkins-hbase4:36881] zookeeper.ReadOnlyZKClient(139): Connect 0x09c81b8a to 
127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:15,625 DEBUG [RS:2;jenkins-hbase4:45513] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73219525, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:15,625 DEBUG [RS:0;jenkins-hbase4:36881] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70fe4c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:15,625 DEBUG [RS:2;jenkins-hbase4:45513] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@771e1f99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:15,625 DEBUG [RS:0;jenkins-hbase4:36881] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33070fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:15,626 DEBUG [RS:1;jenkins-hbase4:40573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@367211a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:15,626 DEBUG [RS:1;jenkins-hbase4:40573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@324d93b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:15,634 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:42175 this server is in the failed servers list 2023-07-23 21:11:15,635 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:45513 2023-07-23 21:11:15,635 INFO [RS:2;jenkins-hbase4:45513] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:15,636 INFO [RS:2;jenkins-hbase4:45513] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:15,636 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1022): About to register with Master. 
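Annotation: each restarted region server opens a ReadOnlyZKClient session to the quorum at 127.0.0.1:59847 and builds its RPC client before registering with the master. A hedged client-side sketch that points a plain HBase connection at the same quorum; the port is taken from the log lines above, everything else stays at defaults.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ConnectToMiniCluster {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Quorum host and client port copied from the ReadOnlyZKClient
            // connect lines above.
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.setInt("hbase.zookeeper.property.clientPort", 59847);
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                System.out.println("connected: " + !connection.isClosed());
            }
        }
    }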
2023-07-23 21:11:15,636 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40555,1690146674547 with isa=jenkins-hbase4.apache.org/172.31.14.131:45513, startcode=1690146675147 2023-07-23 21:11:15,636 DEBUG [RS:2;jenkins-hbase4:45513] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:15,638 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36881 2023-07-23 21:11:15,638 INFO [RS:0;jenkins-hbase4:36881] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:15,638 INFO [RS:0;jenkins-hbase4:36881] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:15,638 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:15,638 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40573 2023-07-23 21:11:15,638 INFO [RS:1;jenkins-hbase4:40573] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:15,638 INFO [RS:1;jenkins-hbase4:40573] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:15,638 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:15,638 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40555,1690146674547 with isa=jenkins-hbase4.apache.org/172.31.14.131:36881, startcode=1690146674798 2023-07-23 21:11:15,638 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46085, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:15,638 DEBUG [RS:0;jenkins-hbase4:36881] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:15,639 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40555,1690146674547 with isa=jenkins-hbase4.apache.org/172.31.14.131:40573, startcode=1690146674972 2023-07-23 21:11:15,639 DEBUG [RS:1;jenkins-hbase4:40573] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:15,643 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,644 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
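Annotation: the reportForDuty calls above end with ServerManager registering each region server, and the earlier warning shows the related wait knobs hbase.master.wait.on.regionservers.mintostart / maxtostart. A sketch of reading roughly the same live/dead server view back through the public Admin API once the cluster is up; connection settings are assumed to point at this cluster.

    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListLiveRegionServers {
        public static void main(String[] args) throws Exception {
            try (Connection connection =
                     ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = connection.getAdmin()) {
                // Roughly the view ServerManager builds as each reportForDuty
                // arrives: live servers plus the DeadServer list being processed
                // by the ServerCrashProcedures above.
                ClusterMetrics metrics = admin.getClusterMetrics();
                for (ServerName server : metrics.getLiveServerMetrics().keySet()) {
                    System.out.println("live: " + server);
                }
                System.out.println("dead: " + metrics.getDeadServerNames());
            }
        }
    }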
2023-07-23 21:11:15,644 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57773, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:15,644 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38951, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:15,644 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:15,645 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,645 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:15,645 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36633 2023-07-23 21:11:15,645 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,645 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:15,646 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:15,646 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36633 2023-07-23 21:11:15,646 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:15,646 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:15,646 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36633 2023-07-23 21:11:15,649 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:15,650 DEBUG [RS:2;jenkins-hbase4:45513] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,650 WARN [RS:2;jenkins-hbase4:45513] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
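Annotation: the NodeChildrenChanged event on /hbase/rs and the ZKUtil watcher lines reflect each live region server keeping an ephemeral child znode under /hbase/rs, which RegionServerTracker watches on the master side. A raw ZooKeeper sketch that lists those children; the quorum address and znode path come from the log, the 90-second session timeout mirrors the ReadOnlyZKClient settings, and a real program would wait for the connection event before issuing requests.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ListRsZnodes {
        public static void main(String[] args) throws Exception {
            // Quorum and base znode taken from the log above.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:59847", 90000, event -> { });
            try {
                // One ephemeral child per live region server, e.g.
                // /hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147.
                List<String> servers = zk.getChildren("/hbase/rs", false);
                servers.forEach(System.out::println);
            } finally {
                zk.close();
            }
        }
    }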
2023-07-23 21:11:15,650 INFO [RS:2;jenkins-hbase4:45513] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:15,650 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,658 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:11:15,658 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:15,658 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:11:15,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=102ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-23 21:11:15,660 DEBUG [RS:1;jenkins-hbase4:40573] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,660 DEBUG [RS:0;jenkins-hbase4:36881] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,660 WARN [RS:1;jenkins-hbase4:40573] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:15,660 INFO [RS:1;jenkins-hbase4:40573] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:15,660 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,660 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45513,1690146675147] 2023-07-23 21:11:15,660 WARN [RS:0;jenkins-hbase4:36881] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
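Annotation: the WALFactory lines instantiate AsyncFSWALProvider for each server's WALs under the per-server WALs/ directory. The provider is normally selected with the hbase.wal.provider key; the value strings below ("asyncfs" for AsyncFSWALProvider, "filesystem" for the classic FSHLog provider) are quoted from memory rather than from this log, so treat them as an assumption and verify against the hbase-default.xml of your build.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderConfig {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // "asyncfs" maps to the AsyncFSWALProvider seen above in HBase 2.x;
            // exact value strings are an assumption, not taken from this run.
            conf.set("hbase.wal.provider", "asyncfs");
            System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
        }
    }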
2023-07-23 21:11:15,661 INFO [RS:0;jenkins-hbase4:36881] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:15,661 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36881,1690146674798] 2023-07-23 21:11:15,661 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40573,1690146674972] 2023-07-23 21:11:15,662 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,675 DEBUG [RS:1;jenkins-hbase4:40573] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,676 DEBUG [RS:2;jenkins-hbase4:45513] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,676 DEBUG [RS:1;jenkins-hbase4:40573] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,676 DEBUG [RS:0;jenkins-hbase4:36881] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,676 DEBUG [RS:2;jenkins-hbase4:45513] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,676 DEBUG [RS:1;jenkins-hbase4:40573] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,676 DEBUG [RS:2;jenkins-hbase4:45513] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,676 DEBUG [RS:0;jenkins-hbase4:36881] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,677 DEBUG [RS:0;jenkins-hbase4:36881] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,677 DEBUG [RS:1;jenkins-hbase4:40573] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:15,677 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:15,677 INFO [RS:1;jenkins-hbase4:40573] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:15,677 INFO [RS:2;jenkins-hbase4:45513] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 
milliseconds 2023-07-23 21:11:15,682 INFO [RS:1;jenkins-hbase4:40573] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:15,683 INFO [RS:1;jenkins-hbase4:40573] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:15,683 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,683 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:15,689 INFO [RS:2;jenkins-hbase4:45513] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:15,689 DEBUG [RS:0;jenkins-hbase4:36881] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:15,689 INFO [RS:0;jenkins-hbase4:36881] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:15,690 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,691 INFO [RS:2;jenkins-hbase4:45513] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:15,693 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
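Annotation: the MemStoreFlusher line reports a global memstore limit of 782.4 M with a low-water mark of 743.3 M, and PressureAwareCompactionThroughputController reports 100 MB/s upper and 50 MB/s lower compaction bounds. Those figures are just the standard heap fractions and throughput bounds applied to the test JVM; the sketch below shows the usual configuration keys, whose exact names are quoted from memory and should be checked against hbase-default.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushAndCompactionTuning {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Fraction of the region server heap all memstores may use, and the
            // low-water fraction at which forced flushing stops.
            conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
            conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
            // Bounds used by PressureAwareCompactionThroughputController, in
            // bytes per second (100 MB and 50 MB here, matching the log).
            conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
            conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        }
    }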
2023-07-23 21:11:15,693 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,693 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:15,694 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,694 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,694 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,694 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,694 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:15,694 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,695 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,695 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,695 DEBUG [RS:1;jenkins-hbase4:40573] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,695 INFO [RS:0;jenkins-hbase4:36881] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:15,695 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,696 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,696 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,699 INFO [RS:0;jenkins-hbase4:36881] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:15,699 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,700 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:15,700 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:15,701 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 DEBUG [RS:2;jenkins-hbase4:45513] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,702 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,702 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,702 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,703 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:15,704 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,705 DEBUG [RS:0;jenkins-hbase4:36881] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:15,711 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,711 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,711 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
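Annotation: each RS_* executor started above (RS_OPEN_REGION, RS_CLOSE_META, RS_LOG_REPLAY_OPS, ...) is a small dedicated thread pool with fixed core and max sizes per event type. The JDK-only analogue below illustrates the corePoolSize=1 / maxPoolSize=1 pattern from those lines; it is an illustration of the idea, not HBase's own executor.ExecutorService class.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SingleThreadNamedExecutor {
        public static void main(String[] args) {
            // corePoolSize=1, maxPoolSize=1, unbounded queue: one worker thread
            // per event type, mirroring e.g. RS_OPEN_REGION above.
            ThreadPoolExecutor openRegionPool = new ThreadPoolExecutor(
                1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
                runnable -> new Thread(runnable, "RS_OPEN_REGION-demo"));
            openRegionPool.submit(() -> System.out.println("open-region task ran"));
            openRegionPool.shutdown();
        }
    }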
2023-07-23 21:11:15,712 DEBUG [jenkins-hbase4:40555] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:11:15,713 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:15,713 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:15,713 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:15,713 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:15,713 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:15,716 INFO [RS:2;jenkins-hbase4:45513] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:15,716 INFO [RS:1;jenkins-hbase4:40573] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:15,716 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45513,1690146675147-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,716 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40573,1690146674972-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:15,718 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40573,1690146674972, state=OPENING 2023-07-23 21:11:15,720 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:15,720 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:15,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=126, ppid=125, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40573,1690146674972}] 2023-07-23 21:11:15,730 INFO [RS:0;jenkins-hbase4:36881] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:15,730 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36881,1690146674798-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
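Annotation: the balancer has just picked jenkins-hbase4.apache.org,40573 for hbase:meta and the master publishes the OPENING state to /hbase/meta-region-server before dispatching the OpenRegionProcedure. Once that procedure finishes, a client can resolve the same location through the public API; a minimal sketch, assuming the connection is configured to point at this cluster.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateMeta {
        public static void main(String[] args) throws Exception {
            try (Connection connection =
                     ConnectionFactory.createConnection(HBaseConfiguration.create());
                 RegionLocator locator =
                     connection.getRegionLocator(TableName.META_TABLE_NAME)) {
                // Resolves the server currently hosting hbase:meta,,1.1588230740,
                // bypassing any cached location (reload = true).
                HRegionLocation location =
                    locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
                System.out.println("hbase:meta is on " + location.getServerName());
            }
        }
    }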
2023-07-23 21:11:15,732 INFO [RS:1;jenkins-hbase4:40573] regionserver.Replication(203): jenkins-hbase4.apache.org,40573,1690146674972 started 2023-07-23 21:11:15,732 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40573,1690146674972, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40573, sessionid=0x1019405901c001e 2023-07-23 21:11:15,732 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:15,732 DEBUG [RS:1;jenkins-hbase4:40573] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,732 DEBUG [RS:1;jenkins-hbase4:40573] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40573,1690146674972' 2023-07-23 21:11:15,732 DEBUG [RS:1;jenkins-hbase4:40573] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:15,732 DEBUG [RS:1;jenkins-hbase4:40573] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40573,1690146674972' 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:15,733 DEBUG [RS:1;jenkins-hbase4:40573] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:15,733 INFO [RS:1;jenkins-hbase4:40573] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:15,733 INFO [RS:1;jenkins-hbase4:40573] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 21:11:15,735 INFO [RS:2;jenkins-hbase4:45513] regionserver.Replication(203): jenkins-hbase4.apache.org,45513,1690146675147 started 2023-07-23 21:11:15,735 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45513,1690146675147, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45513, sessionid=0x1019405901c001f 2023-07-23 21:11:15,735 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45513,1690146675147' 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:15,736 DEBUG [RS:2;jenkins-hbase4:45513] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45513,1690146675147' 2023-07-23 21:11:15,737 DEBUG [RS:2;jenkins-hbase4:45513] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:15,737 DEBUG [RS:2;jenkins-hbase4:45513] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:15,737 DEBUG [RS:2;jenkins-hbase4:45513] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:15,737 INFO [RS:2;jenkins-hbase4:45513] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:15,737 INFO [RS:2;jenkins-hbase4:45513] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 21:11:15,743 INFO [RS:0;jenkins-hbase4:36881] regionserver.Replication(203): jenkins-hbase4.apache.org,36881,1690146674798 started 2023-07-23 21:11:15,743 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36881,1690146674798, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36881, sessionid=0x1019405901c001d 2023-07-23 21:11:15,743 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:15,743 DEBUG [RS:0;jenkins-hbase4:36881] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,743 DEBUG [RS:0;jenkins-hbase4:36881] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36881,1690146674798' 2023-07-23 21:11:15,743 DEBUG [RS:0;jenkins-hbase4:36881] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36881,1690146674798' 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:15,744 DEBUG [RS:0;jenkins-hbase4:36881] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:15,745 DEBUG [RS:0;jenkins-hbase4:36881] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:15,745 INFO [RS:0;jenkins-hbase4:36881] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:15,745 INFO [RS:0;jenkins-hbase4:36881] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
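Annotation: all three region servers report quota support disabled, which is the default; RPC and space quota managers only start when quotas are switched on cluster-wide. A hedged sketch of the configuration switch, assuming hbase.quota.enabled is the standard key rather than anything specific to this test run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableQuotas {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // false by default, which is why RegionServerRpcQuotaManager and
            // RegionServerSpaceQuotaManager above log "Quota support disabled".
            conf.setBoolean("hbase.quota.enabled", true);
            System.out.println("quotas enabled = "
                + conf.getBoolean("hbase.quota.enabled", false));
        }
    }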
2023-07-23 21:11:15,836 INFO [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40573%2C1690146674972, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,40573,1690146674972, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:15,836 WARN [ReadOnlyZKClient-127.0.0.1:59847@0x452f343f] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 21:11:15,837 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:15,839 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54576, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:15,839 INFO [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45513%2C1690146675147, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45513,1690146675147, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:15,840 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40573] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:54576 deadline: 1690146735839, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,847 INFO [RS:0;jenkins-hbase4:36881] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36881%2C1690146674798, suffix=, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:15,859 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:15,860 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:15,861 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:15,870 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:15,870 DEBUG 
[RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:15,870 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:15,882 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:15,882 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:15,882 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:15,883 INFO [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,40573,1690146674972/jenkins-hbase4.apache.org%2C40573%2C1690146674972.1690146675836 2023-07-23 21:11:15,887 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:15,888 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:15,888 DEBUG [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:11:15,894 INFO [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45513,1690146675147/jenkins-hbase4.apache.org%2C45513%2C1690146675147.1690146675840 2023-07-23 21:11:15,899 DEBUG [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK]] 2023-07-23 21:11:15,899 INFO [RS:0;jenkins-hbase4:36881] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798/jenkins-hbase4.apache.org%2C36881%2C1690146674798.1690146675847 2023-07-23 21:11:15,899 DEBUG [RS:0;jenkins-hbase4:36881] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], 
DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK]] 2023-07-23 21:11:15,899 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54592, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:15,907 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:11:15,907 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:15,909 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40573%2C1690146674972.meta, suffix=.meta, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,40573,1690146674972, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:15,927 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:15,927 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:15,927 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:15,932 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,40573,1690146674972/jenkins-hbase4.apache.org%2C40573%2C1690146674972.meta.1690146675910.meta 2023-07-23 21:11:15,934 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:11:15,935 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:11:15,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:11:15,937 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:11:15,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:11:15,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:11:15,938 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:11:15,946 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994 2023-07-23 21:11:15,951 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:15,952 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:15,952 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:15,952 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family 
rep_barrier of region 1588230740 2023-07-23 21:11:15,953 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:15,953 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:15,954 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:11:15,960 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27a108a6498540b9881fffee97f83a46 2023-07-23 21:11:15,960 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier/27a108a6498540b9881fffee97f83a46 2023-07-23 21:11:15,961 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:15,961 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:11:15,962 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:11:15,962 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:11:15,962 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:11:15,968 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): 
loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/a91c64934a824ba3a000ed314e0f4688 2023-07-23 21:11:15,973 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:11:15,973 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:11:15,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:15,974 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:11:15,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:11:15,977 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 21:11:15,979 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:11:15,980 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=157; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12039478560, jitterRate=0.12126381695270538}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:11:15,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:11:15,981 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=126, masterSystemTime=1690146675887 2023-07-23 21:11:15,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:11:15,985 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:11:15,985 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40573,1690146674972, state=OPEN 2023-07-23 21:11:15,987 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:15,987 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:15,989 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=126, resume processing ppid=125 2023-07-23 
21:11:15,989 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=125, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40573,1690146674972 in 267 msec 2023-07-23 21:11:15,991 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-23 21:11:15,991 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 429 msec 2023-07-23 21:11:16,159 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:16,160 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:38927 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:16,161 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:38927 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 2023-07-23 21:11:16,268 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:38927 this server is in the failed servers list 2023-07-23 21:11:16,474 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:38927 this server is in the failed servers list 2023-07-23 21:11:16,780 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:38927 this server is in the failed servers list 2023-07-23 21:11:17,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1609ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1507ms 2023-07-23 21:11:17,250 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-23 21:11:17,285 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:38927 this server is in the failed servers list 2023-07-23 21:11:18,297 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:38927 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:18,299 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:38927 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 2023-07-23 21:11:18,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3112ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3010ms 2023-07-23 21:11:20,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4514ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-23 21:11:20,070 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
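The ServerManager(801)/(821) entries above show the active master holding startup until at least one RegionServer has checked in, polling and logging progress until a timeout window closes. As a hedged illustration only (not taken from this test's own setup), that wait is normally driven by the hbase.master.wait.on.regionservers.* settings; the sketch below shows how they could be set on a Configuration. The class name and the chosen values are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MasterWaitSettings {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Minimum RegionServer count the master waits for before it proceeds
        // (the "expecting min=1 server(s)" part of the ServerManager lines above).
        conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
        // Upper bound on how long the master keeps waiting (the "timeout=4500ms" part).
        conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);
        // How often the wait loop re-checks and logs progress (illustrative value).
        conf.setInt("hbase.master.wait.on.regionservers.interval", 1500);
        return conf;
      }
    }

Once the minimum is satisfied and the count stops changing for the configured interval, the master moves on, which matches the "Finished waiting on RegionServer count=3" line above.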
2023-07-23 21:11:20,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,42175,1690146662090, regionLocation=jenkins-hbase4.apache.org,42175,1690146662090, openSeqNum=15 2023-07-23 21:11:20,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=99f4bb247673f611dc82de993563e38b, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,42175,1690146662090, regionLocation=jenkins-hbase4.apache.org,42175,1690146662090, openSeqNum=2 2023-07-23 21:11:20,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,38927,1690146661924, regionLocation=jenkins-hbase4.apache.org,38927,1690146661924, openSeqNum=77 2023-07-23 21:11:20,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 21:11:20,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690146740074 2023-07-23 21:11:20,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690146800075 2023-07-23 21:11:20,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-23 21:11:20,092 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,42175,1690146662090 had 3 regions 2023-07-23 21:11:20,092 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,39795,1690146662307 had 0 regions 2023-07-23 21:11:20,092 INFO [PEWorker-4] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,38927,1690146661924 had 1 regions 2023-07-23 21:11:20,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40555,1690146674547-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:20,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40555,1690146674547-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:20,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40555,1690146674547-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:20,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40555, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:20,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:20,094 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
is NOT online; state={cfdae6c1dde0d9be1f26f623634660ba state=OPEN, ts=1690146680074, server=jenkins-hbase4.apache.org,42175,1690146662090}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-23 21:11:20,095 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=123, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,42175,1690146662090, splitWal=true, meta=true, isMeta: false 2023-07-23 21:11:20,095 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=124, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,39795,1690146662307, splitWal=true, meta=false, isMeta: false 2023-07-23 21:11:20,095 INFO [PEWorker-4] procedure.ServerCrashProcedure(300): Splitting WALs pid=122, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,38927,1690146661924, splitWal=true, meta=false, isMeta: false 2023-07-23 21:11:20,098 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,42175,1690146662090/hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., unknown_server=jenkins-hbase4.apache.org,42175,1690146662090/hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b., unknown_server=jenkins-hbase4.apache.org,38927,1690146661924/hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:20,098 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090-splitting dir is empty, no logs to split. 2023-07-23 21:11:20,098 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,42175,1690146662090 WAL count=0, meta=false 2023-07-23 21:11:20,098 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307-splitting 2023-07-23 21:11:20,099 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307-splitting dir is empty, no logs to split. 2023-07-23 21:11:20,099 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,39795,1690146662307 WAL count=0, meta=false 2023-07-23 21:11:20,100 DEBUG [PEWorker-4] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924-splitting 2023-07-23 21:11:20,101 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924-splitting dir is empty, no logs to split. 2023-07-23 21:11:20,101 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,38927,1690146661924 WAL count=0, meta=false 2023-07-23 21:11:20,102 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,42175,1690146662090-splitting dir is empty, no logs to split. 
2023-07-23 21:11:20,102 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,42175,1690146662090 WAL count=0, meta=false 2023-07-23 21:11:20,102 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,42175,1690146662090 WAL splitting is done? wals=0, meta=false 2023-07-23 21:11:20,104 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN}, {pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN}] 2023-07-23 21:11:20,105 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN 2023-07-23 21:11:20,106 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN 2023-07-23 21:11:20,107 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:20,107 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:20,108 DEBUG [jenkins-hbase4:40555] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:11:20,108 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:20,108 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:20,108 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:20,108 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:20,108 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-23 21:11:20,111 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924-splitting dir is empty, no logs to split. 2023-07-23 21:11:20,111 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,38927,1690146661924 WAL count=0, meta=false 2023-07-23 21:11:20,111 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,38927,1690146661924 WAL splitting is done? 
wals=0, meta=false 2023-07-23 21:11:20,112 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307-splitting dir is empty, no logs to split. 2023-07-23 21:11:20,112 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,39795,1690146662307 WAL count=0, meta=false 2023-07-23 21:11:20,112 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,39795,1690146662307 WAL splitting is done? wals=0, meta=false 2023-07-23 21:11:20,113 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:20,113 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:20,113 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146680113"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146680113"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146680113"}]},"ts":"1690146680113"} 2023-07-23 21:11:20,113 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146680113"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146680113"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146680113"}]},"ts":"1690146680113"} 2023-07-23 21:11:20,117 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=127, state=RUNNABLE; OpenRegionProcedure cfdae6c1dde0d9be1f26f623634660ba, server=jenkins-hbase4.apache.org,45513,1690146675147}] 2023-07-23 21:11:20,118 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=128, state=RUNNABLE; OpenRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,40573,1690146674972}] 2023-07-23 21:11:20,121 INFO [PEWorker-4] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,38927,1690146661924 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38927,1690146661924-splitting does not exist. 2023-07-23 21:11:20,122 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,39795,1690146662307 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,39795,1690146662307-splitting does not exist. 
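The SplitLogManager/SplitWALManager entries above repeatedly find the per-server WALs/<server>-splitting directories empty or already removed, so each ServerCrashProcedure has no logs to replay before reassigning regions. A minimal sketch of that kind of check, using only the Hadoop FileSystem API and the directory layout visible in the log paths, is below; the helper class and method are hypothetical and are not HBase's implementation.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SplittingDirCheck {
      // Returns true when a server's WAL "-splitting" directory is absent or empty,
      // which is the situation the "dir is empty, no logs to split" lines report.
      public static boolean nothingToSplit(Configuration conf, Path walRootDir, String serverName)
          throws Exception {
        FileSystem fs = walRootDir.getFileSystem(conf);
        Path splitting = new Path(new Path(walRootDir, "WALs"), serverName + "-splitting");
        if (!fs.exists(splitting)) {
          return true;
        }
        FileStatus[] logs = fs.listStatus(splitting);
        return logs == null || logs.length == 0;
      }
    }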
2023-07-23 21:11:20,122 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN}] 2023-07-23 21:11:20,123 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN 2023-07-23 21:11:20,124 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,39795,1690146662307 after splitting done 2023-07-23 21:11:20,124 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-23 21:11:20,124 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,39795,1690146662307 from processing; numProcessing=2 2023-07-23 21:11:20,126 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,39795,1690146662307, splitWal=true, meta=false in 4.6320 sec 2023-07-23 21:11:20,166 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 21:11:20,271 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:20,271 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:20,272 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37754, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:20,274 DEBUG [jenkins-hbase4:40555] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:11:20,274 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:20,275 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:20,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 99f4bb247673f611dc82de993563e38b, NAME => 'hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:20,275 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:20,275 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:20,275 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:20,275 DEBUG [jenkins-hbase4:40555] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:20,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:20,276 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:20,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,277 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146680276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146680276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146680276"}]},"ts":"1690146680276"} 2023-07-23 21:11:20,279 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,279 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=131, state=RUNNABLE; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36881,1690146674798}] 2023-07-23 21:11:20,280 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
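The CompactionConfiguration(173) lines above and below print the effective selection parameters for each store: minCompactSize 128 MB, min/max files to compact 3/10, ratio 1.2, off-peak ratio 5.0, and a weekly major-compaction period with 0.5 jitter. As a rough mapping only, those figures line up with the standard hbase.hstore.compaction.* and hbase.hregion.majorcompaction* keys; the sketch below sets them explicitly on a Configuration. Treat the key-to-field mapping and the class name as assumptions for illustration, not as output of this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettings {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // "files [minFilesToCompact:3, maxFilesToCompact:10)" in the log lines.
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        // "size [minCompactSize:128 MB, ...)": files smaller than this are always
        // eligible for minor compaction regardless of the ratio check.
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
        // "ratio 1.200000; off-peak ratio 5.000000".
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        // "major period 604800000, major jitter 0.500000": weekly majors, +/- 50%.
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
        return conf;
      }
    }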
2023-07-23 21:11:20,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cfdae6c1dde0d9be1f26f623634660ba, NAME => 'hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:20,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:20,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,280 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/q 2023-07-23 21:11:20,280 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/q 2023-07-23 21:11:20,281 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99f4bb247673f611dc82de993563e38b columnFamilyName q 2023-07-23 21:11:20,282 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,283 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info 2023-07-23 21:11:20,283 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info 2023-07-23 21:11:20,283 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(310): 
Store=99f4bb247673f611dc82de993563e38b/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:20,283 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,283 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cfdae6c1dde0d9be1f26f623634660ba columnFamilyName info 2023-07-23 21:11:20,284 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/u 2023-07-23 21:11:20,284 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/u 2023-07-23 21:11:20,284 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99f4bb247673f611dc82de993563e38b columnFamilyName u 2023-07-23 21:11:20,285 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(310): Store=99f4bb247673f611dc82de993563e38b/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:20,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,291 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-23 21:11:20,293 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:20,297 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 99f4bb247673f611dc82de993563e38b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11175717920, jitterRate=0.040819838643074036}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-23 21:11:20,297 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:20,298 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:11:20,298 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b., pid=130, masterSystemTime=1690146680271 2023-07-23 21:11:20,298 DEBUG [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/info/39241fc32c9441b98ec8f405a6015e4c 2023-07-23 21:11:20,299 INFO [StoreOpener-cfdae6c1dde0d9be1f26f623634660ba-1] regionserver.HStore(310): Store=cfdae6c1dde0d9be1f26f623634660ba/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:20,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,301 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:20,301 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
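The FlushLargeStoresPolicy(65) entries (for hbase:meta earlier and hbase:quota here) report that no per-column-family flush lower bound is configured, so the policy falls back to the region's memstore flush size divided by the number of families: 42.7 M for meta's three families and 64.0 M for quota's two. A simplified sketch of that fallback follows; the class, method, and exact configuration key names are assumptions for illustration only.

    import org.apache.hadoop.conf.Configuration;

    public class FlushLowerBoundSketch {
      // Hypothetical helper mirroring the fallback described above: when no
      // per-family lower bound is set, divide the region memstore flush size
      // by the number of column families.
      public static long flushSizeLowerBound(Configuration conf, int numFamilies) {
        long lowerBound = conf.getLong("hbase.hregion.percolumnfamilyflush.size.lower.bound.min", -1);
        if (lowerBound > 0) {
          return lowerBound;
        }
        long memstoreFlushSize = conf.getLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        return memstoreFlushSize / Math.max(1, numFamilies);
      }
    }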
2023-07-23 21:11:20,302 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:20,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,302 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146680302"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146680302"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146680302"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146680302"}]},"ts":"1690146680302"} 2023-07-23 21:11:20,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=128 2023-07-23 21:11:20,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=128, state=SUCCESS; OpenRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,40573,1690146674972 in 189 msec 2023-07-23 21:11:20,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:20,310 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, ASSIGN in 205 msec 2023-07-23 21:11:20,312 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cfdae6c1dde0d9be1f26f623634660ba; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11928591680, jitterRate=0.11093667149543762}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:20,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:11:20,313 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., pid=129, masterSystemTime=1690146680271 2023-07-23 21:11:20,318 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=cfdae6c1dde0d9be1f26f623634660ba, regionState=OPEN, openSeqNum=18, regionLocation=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:20,318 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146680318"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146680318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146680318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146680318"}]},"ts":"1690146680318"} 2023-07-23 21:11:20,318 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task 
for hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:20,319 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:20,320 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:38927 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:20,321 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:38927 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 2023-07-23 21:11:20,321 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4170 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:38927 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., hostname=jenkins-hbase4.apache.org,38927,1690146661924, seqNum=77, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:38927 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:38927 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:20,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=127 2023-07-23 21:11:20,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=127, state=SUCCESS; OpenRegionProcedure cfdae6c1dde0d9be1f26f623634660ba, server=jenkins-hbase4.apache.org,45513,1690146675147 in 204 msec 2023-07-23 21:11:20,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=123 2023-07-23 21:11:20,325 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,42175,1690146662090 after splitting done 2023-07-23 21:11:20,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cfdae6c1dde0d9be1f26f623634660ba, ASSIGN in 219 msec 2023-07-23 21:11:20,325 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,42175,1690146662090 from processing; numProcessing=1 2023-07-23 21:11:20,326 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,42175,1690146662090, splitWal=true, meta=true in 4.8350 sec 2023-07-23 21:11:20,433 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:20,434 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 
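The NettyRpcConnection/FailedServers/RpcRetryingCallerImpl entries above show the rsgroup startup worker retrying against the dead RegionServer at 172.31.14.131:38927: each refused connection parks the address on a failed-servers list, intermediate attempts fail fast ("Not trying to connect ... in the failed servers list"), and the caller backs off and retries. The general client-side knobs for this behaviour are sketched below; the specific retries=46 budget in the log comes from the rsgroup worker's own caller setup, so the values shown are illustrative assumptions rather than this test's configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClientRetrySettings {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Retry budget per operation (the "tries=6, retries=46" message counts
        // attempts against a budget like this).
        conf.setInt("hbase.client.retries.number", 46);
        // Base pause between attempts; the effective delay grows with the retry count.
        conf.setLong("hbase.client.pause", 100);
        // How long an address stays on the failed-servers list before the client
        // is willing to dial it again instead of failing fast.
        conf.setLong("hbase.ipc.client.failed.servers.expiry", 2000);
        return conf;
      }
    }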
2023-07-23 21:11:20,435 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41684, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:20,439 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:20,439 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674d6b4e3c5d6a4f0860e9c874b3e183, NAME => 'hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:20,439 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:20,439 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. service=MultiRowMutationService 2023-07-23 21:11:20,440 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-23 21:11:20,440 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,440 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:20,440 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,440 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,441 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,442 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:11:20,442 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:11:20,443 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674d6b4e3c5d6a4f0860e9c874b3e183 columnFamilyName m 2023-07-23 21:11:20,450 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703 2023-07-23 21:11:20,456 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8 2023-07-23 21:11:20,460 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef999fa06b66465f978c7309df40e37f 2023-07-23 21:11:20,461 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f 2023-07-23 21:11:20,461 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(310): Store=674d6b4e3c5d6a4f0860e9c874b3e183/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:20,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:20,467 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674d6b4e3c5d6a4f0860e9c874b3e183; next sequenceid=84; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2c59d5a0, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:20,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:20,468 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., pid=132, masterSystemTime=1690146680433 2023-07-23 21:11:20,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 21:11:20,474 DEBUG 
[RS:0;jenkins-hbase4:36881-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 21:11:20,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:20,476 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:20,476 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, openSeqNum=84, regionLocation=jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:20,476 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146680476"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146680476"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146680476"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146680476"}]},"ts":"1690146680476"} 2023-07-23 21:11:20,477 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16056 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 21:11:20,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=131 2023-07-23 21:11:20,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=131, state=SUCCESS; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36881,1690146674798 in 199 msec 2023-07-23 21:11:20,480 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.HStore(1912): 674d6b4e3c5d6a4f0860e9c874b3e183/m is initiating minor compaction (all files) 2023-07-23 21:11:20,480 INFO [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 674d6b4e3c5d6a4f0860e9c874b3e183/m in hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
2023-07-23 21:11:20,480 INFO [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703] into tmpdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp, totalSize=15.7 K 2023-07-23 21:11:20,481 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=122 2023-07-23 21:11:20,481 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,38927,1690146661924 after splitting done 2023-07-23 21:11:20,481 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=122, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, ASSIGN in 357 msec 2023-07-23 21:11:20,481 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] compactions.Compactor(207): Compacting c559075bcb8741e4859507bb7fb7cfc8, keycount=3, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1690146645922 2023-07-23 21:11:20,481 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,38927,1690146661924 from processing; numProcessing=0 2023-07-23 21:11:20,482 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] compactions.Compactor(207): Compacting ef999fa06b66465f978c7309df40e37f, keycount=21, bloomtype=ROW, size=5.7 K, encoding=NONE, compression=NONE, seqNum=73, earliestPutTs=1690146658719 2023-07-23 21:11:20,482 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] compactions.Compactor(207): Compacting 314ee21ed86d420b8896380bfa6f8703, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1690146671546 2023-07-23 21:11:20,482 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,38927,1690146661924, splitWal=true, meta=false in 4.9950 sec 2023-07-23 21:11:20,512 INFO [RS:0;jenkins-hbase4:36881-shortCompactions-0] throttle.PressureAwareThroughputController(145): 674d6b4e3c5d6a4f0860e9c874b3e183#m#compaction#11 average throughput is 0.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 21:11:20,529 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/2dccac9ad3c64117ac4486d1b2cba9e0 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/2dccac9ad3c64117ac4486d1b2cba9e0 2023-07-23 21:11:20,548 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 21:11:20,550 INFO [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 674d6b4e3c5d6a4f0860e9c874b3e183/m of 674d6b4e3c5d6a4f0860e9c874b3e183 into 2dccac9ad3c64117ac4486d1b2cba9e0(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-23 21:11:20,550 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:20,550 INFO [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., storeName=674d6b4e3c5d6a4f0860e9c874b3e183/m, priority=13, startTime=1690146680470; duration=0sec 2023-07-23 21:11:20,550 DEBUG [RS:0;jenkins-hbase4:36881-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 21:11:21,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-23 21:11:21,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:21,102 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:21,115 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:11:21,117 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 21:11:21,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.796sec 2023-07-23 21:11:21,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 21:11:21,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-23 21:11:21,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 21:11:21,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40555,1690146674547-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 21:11:21,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40555,1690146674547-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 21:11:21,119 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 21:11:21,210 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x1bc40d19 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:21,216 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3614e6e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:21,218 DEBUG [hconnection-0x1c229db-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:21,220 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49886, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:21,225 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-23 21:11:21,225 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1bc40d19 to 127.0.0.1:59847 2023-07-23 21:11:21,225 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:21,226 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(2939): Invalidated connection. 
Updating master addresses before: jenkins-hbase4.apache.org:40555 after: jenkins-hbase4.apache.org:40555 2023-07-23 21:11:21,227 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x1ee10034 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:21,231 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@efd6ec2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:21,231 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:21,442 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:11:21,678 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 21:11:21,678 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-23 21:11:21,679 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 21:11:24,336 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:24,337 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41694, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:24,339 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 21:11:24,339 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-23 21:11:24,347 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,347 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:24,347 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:24,349 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-23 21:11:24,349 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 21:11:24,435 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 21:11:24,437 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33512, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 21:11:24,439 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:11:24,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 21:11:24,440 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(139): Connect 0x1f342c91 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:24,448 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e104e70, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:24,448 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:24,451 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:24,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019405901c0027 connected 2023-07-23 21:11:24,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 
21:11:24,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:24,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:11:24,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:24,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:24,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:24,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:24,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:24,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:24,466 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 21:11:24,501 DEBUG [Finalizer] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09a1fd58 to 127.0.0.1:59847 2023-07-23 21:11:24,501 DEBUG [Finalizer] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:24,514 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:24,514 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:24,515 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:24,515 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:24,515 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:24,515 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:24,515 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting 
hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:24,515 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38679 2023-07-23 21:11:24,516 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:24,517 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:24,517 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:24,518 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:24,519 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38679 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:24,524 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:386790x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:24,525 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:386790x0, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:24,526 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38679-0x1019405901c0028 connected 2023-07-23 21:11:24,527 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 21:11:24,527 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:24,529 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38679 2023-07-23 21:11:24,529 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38679 2023-07-23 21:11:24,530 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38679 2023-07-23 21:11:24,530 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38679 2023-07-23 21:11:24,530 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38679 2023-07-23 21:11:24,532 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:24,532 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:24,532 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:24,532 INFO [Listener at 
localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:24,532 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:24,533 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:24,533 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:24,533 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 32807 2023-07-23 21:11:24,533 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:24,536 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:24,536 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f637f0a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:24,536 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:24,536 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@452c2b42{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:24,650 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:24,650 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:24,650 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:24,651 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:24,652 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:24,652 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@792095d2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-32807-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6220186615541551105/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:24,654 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@1ee82094{HTTP/1.1, (http/1.1)}{0.0.0.0:32807} 2023-07-23 21:11:24,655 INFO [Listener at localhost/38995] server.Server(415): Started @50455ms 2023-07-23 21:11:24,657 INFO 
[RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:24,657 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:24,659 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:24,659 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:24,660 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:24,663 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ReadOnlyZKClient(139): Connect 0x1efd747d to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:24,666 DEBUG [RS:3;jenkins-hbase4:38679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@103a89d2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:24,666 DEBUG [RS:3;jenkins-hbase4:38679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14051f0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:24,675 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:38679 2023-07-23 21:11:24,675 INFO [RS:3;jenkins-hbase4:38679] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:24,675 INFO [RS:3;jenkins-hbase4:38679] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:24,675 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:24,675 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40555,1690146674547 with isa=jenkins-hbase4.apache.org/172.31.14.131:38679, startcode=1690146684514 2023-07-23 21:11:24,675 DEBUG [RS:3;jenkins-hbase4:38679] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:24,678 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52237, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:24,678 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,678 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:11:24,678 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:24,678 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:24,678 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36633 2023-07-23 21:11:24,681 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:24,681 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:24,681 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,681 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:24,681 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:24,681 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,681 WARN [RS:3;jenkins-hbase4:38679] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:11:24,681 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:11:24,681 INFO [RS:3;jenkins-hbase4:38679] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:24,681 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:24,682 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,682 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:24,682 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38679,1690146684514] 2023-07-23 21:11:24,682 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:24,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:24,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:24,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:24,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 21:11:24,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,684 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:24,684 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:24,684 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:24,685 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:24,686 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:24,686 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,686 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:24,687 DEBUG [RS:3;jenkins-hbase4:38679] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:24,687 INFO [RS:3;jenkins-hbase4:38679] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:24,688 INFO [RS:3;jenkins-hbase4:38679] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:24,688 INFO [RS:3;jenkins-hbase4:38679] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:24,689 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:24,689 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:24,690 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,690 DEBUG [RS:3;jenkins-hbase4:38679] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:24,691 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:24,691 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:24,691 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:24,702 INFO [RS:3;jenkins-hbase4:38679] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:24,702 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38679,1690146684514-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:24,714 INFO [RS:3;jenkins-hbase4:38679] regionserver.Replication(203): jenkins-hbase4.apache.org,38679,1690146684514 started 2023-07-23 21:11:24,714 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38679,1690146684514, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38679, sessionid=0x1019405901c0028 2023-07-23 21:11:24,714 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:24,714 DEBUG [RS:3;jenkins-hbase4:38679] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,714 DEBUG [RS:3;jenkins-hbase4:38679] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38679,1690146684514' 2023-07-23 21:11:24,714 DEBUG [RS:3;jenkins-hbase4:38679] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:24,715 DEBUG [RS:3;jenkins-hbase4:38679] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:24,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:24,715 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:24,715 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:24,715 DEBUG [RS:3;jenkins-hbase4:38679] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:24,715 DEBUG [RS:3;jenkins-hbase4:38679] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38679,1690146684514' 2023-07-23 21:11:24,715 DEBUG [RS:3;jenkins-hbase4:38679] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:24,716 DEBUG [RS:3;jenkins-hbase4:38679] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:24,716 DEBUG [RS:3;jenkins-hbase4:38679] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:24,716 INFO [RS:3;jenkins-hbase4:38679] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:24,716 INFO [RS:3;jenkins-hbase4:38679] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 21:11:24,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:24,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:24,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:24,722 DEBUG [hconnection-0x18660f26-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:24,723 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:24,728 DEBUG [hconnection-0x18660f26-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:24,729 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41702, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:24,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:24,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40555] to rsgroup master 2023-07-23 21:11:24,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:24,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33512 deadline: 1690147884733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 2023-07-23 21:11:24,734 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor65.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:11:24,735 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:24,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:24,736 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36881, jenkins-hbase4.apache.org:38679, jenkins-hbase4.apache.org:40573, jenkins-hbase4.apache.org:45513], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:24,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:24,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:24,779 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=551 (was 514) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:40555 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-838552584_17 at /127.0.0.1:41506 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741890_1066] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1485033198) connection to localhost/127.0.0.1:32841 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1977390465-1970 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741892_1068, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1485033198) connection to localhost/127.0.0.1:32841 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-312a26fd-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp371193635-1661 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1977390465-1976 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:36881-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1325407277-1631-acceptor-0@3315637a-ServerConnector@731c75e{HTTP/1.1, (http/1.1)}{0.0.0.0:36633} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4072c368-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost:32841 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741891_1067, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4072c368-metaLookup-shared--pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData-prefix:jenkins-hbase4.apache.org,40555,1690146674547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x38b5f8c1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1977390465-1977 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1485033198) connection to localhost/127.0.0.1:32841 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_664987722_17 at /127.0.0.1:45086 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741892_1068] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1335733701_17 at /127.0.0.1:41632 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741891_1067] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1f342c91-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1335733701_17 at /127.0.0.1:45104 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp611398628-1695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1335733701_17 at /127.0.0.1:41652 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp367859940-1723 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x09c81b8a-SendThread(127.0.0.1:59847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1efd747d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:36881-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp371193635-1668 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_664987722_17 at /127.0.0.1:41634 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741892_1068] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741891_1067, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp371193635-1662-acceptor-0@4b6cfcf8-ServerConnector@318fe04f{HTTP/1.1, (http/1.1)}{0.0.0.0:39325} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914-prefix:jenkins-hbase4.apache.org,36881,1690146674798 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp976381751-1737 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1335733701_17 at /127.0.0.1:45078 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741891_1067] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp371193635-1666 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914-prefix:jenkins-hbase4.apache.org,40573,1690146674972 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:32841 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1977390465-1974 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp367859940-1728 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38679Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1335733701_17 at /127.0.0.1:41508 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741891_1067] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18660f26-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x7de674e9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1ee10034-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1ee10034 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611398628-1697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:32841 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp367859940-1724 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp371193635-1667 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp371193635-1665 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1efd747d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1485033198) connection to localhost/127.0.0.1:32841 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x452f343f-SendThread(127.0.0.1:59847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) 
Potentially hanging thread: qtp1977390465-1975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1325407277-1630 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976381751-1735 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x18660f26-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-672884af-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741892_1068, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp367859940-1722-acceptor-0@68ebde25-ServerConnector@7c81bac{HTTP/1.1, (http/1.1)}{0.0.0.0:46715} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4072c368-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741890_1066, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1977390465-1972 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp976381751-1732 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741892_1068, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40573 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1f342c91 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1325407277-1633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x452f343f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38679 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4072c368-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914-prefix:jenkins-hbase4.apache.org,45513,1690146675147 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976381751-1736-acceptor-0@46a35b3d-ServerConnector@34f59e01{HTTP/1.1, (http/1.1)}{0.0.0.0:38313} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp367859940-1721 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1325407277-1634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40573-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40573Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp611398628-1691 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:36881 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp611398628-1696 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x7de674e9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146675555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267)
    org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x38b5f8c1-SendThread(127.0.0.1:59847)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
Potentially hanging thread: qtp611398628-1698
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp976381751-1734
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1335733701_17 at /127.0.0.1:41530 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741894_1070]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp976381751-1739
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-757339997_17 at /127.0.0.1:41640 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741893_1069]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp367859940-1726
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp611398628-1694
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: qtp976381751-1733
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/366059715.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:32841
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:2;jenkins-hbase4:45513-longCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:2;jenkins-hbase4:45513
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159)
    java.security.AccessController.doPrivileged(Native Method)
    javax.security.auth.Subject.doAs(Subject.java:360)
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
    org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp371193635-1663
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:32841
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Session-HouseKeeper-3bc9f0d9-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-10-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36881
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RS:3;jenkins-hbase4:38679
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159)
    java.security.AccessController.doPrivileged(Native Method)
    javax.security.auth.Subject.doAs(Subject.java:360)
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
    org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319)
    org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: qtp1325407277-1635
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: qtp367859940-1727
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Session-HouseKeeper-55d1cba7-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-838552584_17 at /127.0.0.1:45064 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741890_1066]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1ee10034-SendThread(127.0.0.1:59847)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
Potentially hanging thread: qtp1325407277-1636
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38577,1690146661744
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40555
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741893_1069, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-10-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x38b5f8c1-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741891_1067, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1efd747d-SendThread(127.0.0.1:59847)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
Potentially hanging thread: qtp1977390465-1973
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-11-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x09c81b8a-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Potentially hanging thread: RS-EventLoopGroup-11-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36881
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:32841
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741890_1066, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp611398628-1692-acceptor-0@5210c7a2-ServerConnector@727c0667{HTTP/1.1, (http/1.1)}{0.0.0.0:39505}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp611398628-1693
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1325407277-1632
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36881
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RS-EventLoopGroup-12-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38679
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38679
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RS-EventLoopGroup-9-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-757339997_17 at /127.0.0.1:45096 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741893_1069]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1977390465-1971-acceptor-0@43293427-ServerConnector@1ee82094{HTTP/1.1, (http/1.1)}{0.0.0.0:32807}
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x09c81b8a
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp976381751-1738
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: PacketResponder: BP-1946696265-172.31.14.131-1690146636244:blk_1073741890_1066, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-757339997_17 at /127.0.0.1:41526 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741893_1069]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp1325407277-1637
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data2/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp371193635-1664
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: jenkins-hbase4:45513Replication Statistics #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:3;jenkins-hbase4:38679-longCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-13-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: jenkins-hbase4:36881Replication Statistics #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-12-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40573
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_664987722_17 at /127.0.0.1:41520 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741892_1068]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45513
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45513
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-252488829_17 at /127.0.0.1:37372 [Waiting for operation #3]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: qtp367859940-1725
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x452f343f
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324)
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/527689089.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x7de674e9-SendThread(127.0.0.1:59847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-838552584_17 at /127.0.0.1:41610 [Receiving block BP-1946696265-172.31.14.131-1690146636244:blk_1073741890_1066] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59847@0x1f342c91-SendThread(127.0.0.1:59847) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-68bd72f8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914-prefix:jenkins-hbase4.apache.org,40573,1690146674972.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146675555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36881 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=840 (was 791) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=420 (was 466), ProcessCount=173 (was 173), AvailableMemoryMB=7866 (was 7892) 2023-07-23 21:11:24,781 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=551 is superior to 500 2023-07-23 21:11:24,799 INFO [Listener at localhost/38995] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=551, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=420, ProcessCount=173, AvailableMemoryMB=7866 2023-07-23 21:11:24,799 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=551 is superior to 500 2023-07-23 21:11:24,799 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(132): testClearDeadServers 2023-07-23 21:11:24,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:24,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:24,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:11:24,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:24,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:24,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:24,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:24,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:24,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:24,816 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:24,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:24,818 INFO [RS:3;jenkins-hbase4:38679] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38679%2C1690146684514, suffix=, 
logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38679,1690146684514, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:24,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:24,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:24,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:24,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:24,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40555] to rsgroup master 2023-07-23 21:11:24,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:24,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33512 deadline: 1690147884838, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 
2023-07-23 21:11:24,840 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:24,840 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor65.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:11:24,843 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:24,844 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:24,844 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:24,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:24,845 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36881, jenkins-hbase4.apache.org:38679, jenkins-hbase4.apache.org:40573, jenkins-hbase4.apache.org:45513], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:24,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:24,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:24,846 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-23 21:11:24,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:24,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:24,852 INFO [RS:3;jenkins-hbase4:38679] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,38679,1690146684514/jenkins-hbase4.apache.org%2C38679%2C1690146684514.1690146684819 2023-07-23 21:11:24,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testClearDeadServers_30812797 2023-07-23 21:11:24,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:24,854 DEBUG [RS:3;jenkins-hbase4:38679] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK], DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK]] 2023-07-23 21:11:24,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_30812797 2023-07-23 21:11:24,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:11:24,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:24,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:24,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:24,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36881, jenkins-hbase4.apache.org:38679, jenkins-hbase4.apache.org:40573] to rsgroup Group_testClearDeadServers_30812797 2023-07-23 21:11:24,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:24,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:24,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_30812797 2023-07-23 21:11:24,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:11:24,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(238): Moving server region 674d6b4e3c5d6a4f0860e9c874b3e183, which do not belong to RSGroup 
Group_testClearDeadServers_30812797 2023-07-23 21:11:24,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] procedure2.ProcedureExecutor(1029): Stored pid=133, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE 2023-07-23 21:11:24,869 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE 2023-07-23 21:11:24,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(238): Moving server region 99f4bb247673f611dc82de993563e38b, which do not belong to RSGroup Group_testClearDeadServers_30812797 2023-07-23 21:11:24,869 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:24,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] procedure2.ProcedureExecutor(1029): Stored pid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, REOPEN/MOVE 2023-07-23 21:11:24,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testClearDeadServers_30812797 2023-07-23 21:11:24,870 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, REOPEN/MOVE 2023-07-23 21:11:24,870 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146684869"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146684869"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146684869"}]},"ts":"1690146684869"} 2023-07-23 21:11:24,870 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:24,871 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146684870"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146684870"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146684870"}]},"ts":"1690146684870"} 2023-07-23 21:11:24,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-23 21:11:24,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-23 21:11:24,871 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-23 
21:11:24,871 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=133, state=RUNNABLE; CloseRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36881,1690146674798}] 2023-07-23 21:11:24,871 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40573,1690146674972, state=CLOSING 2023-07-23 21:11:24,872 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=134, state=RUNNABLE; CloseRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,40573,1690146674972}] 2023-07-23 21:11:24,873 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:24,873 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:24,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=135, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40573,1690146674972}] 2023-07-23 21:11:24,876 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=137, ppid=134, state=RUNNABLE; CloseRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:25,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674d6b4e3c5d6a4f0860e9c874b3e183, disabling compactions & flushes 2023-07-23 21:11:25,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:25,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:25,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. after waiting 0 ms 2023-07-23 21:11:25,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
2023-07-23 21:11:25,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 674d6b4e3c5d6a4f0860e9c874b3e183 1/1 column families, dataSize=2.21 KB heapSize=3.71 KB 2023-07-23 21:11:25,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-23 21:11:25,030 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:11:25,030 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:11:25,030 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:11:25,030 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:11:25,030 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:11:25,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.46 KB heapSize=6.41 KB 2023-07-23 21:11:25,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.21 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/06a38f371d094f5095ff12f34605e845 2023-07-23 21:11:25,045 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.46 KB at sequenceid=167 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/7ca6f0427221406095057d8a1c4eb7ba 2023-07-23 21:11:25,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06a38f371d094f5095ff12f34605e845 2023-07-23 21:11:25,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/06a38f371d094f5095ff12f34605e845 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/06a38f371d094f5095ff12f34605e845 2023-07-23 21:11:25,052 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/7ca6f0427221406095057d8a1c4eb7ba as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba 2023-07-23 21:11:25,057 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba, entries=30, sequenceid=167, filesize=8.2 K 2023-07-23 21:11:25,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06a38f371d094f5095ff12f34605e845 2023-07-23 21:11:25,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/06a38f371d094f5095ff12f34605e845, entries=5, sequenceid=95, filesize=5.3 K 2023-07-23 21:11:25,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.21 KB/2268, heapSize ~3.70 KB/3784, currentSize=0 B/0 for 674d6b4e3c5d6a4f0860e9c874b3e183 in 31ms, sequenceid=95, compaction requested=false 2023-07-23 21:11:25,058 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.46 KB/3538, heapSize ~5.90 KB/6040, currentSize=0 B/0 for 1588230740 in 28ms, sequenceid=167, compaction requested=true 2023-07-23 21:11:25,068 DEBUG [StoreCloser-hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703] to archive 2023-07-23 21:11:25,069 DEBUG [StoreCloser-hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-23 21:11:25,071 DEBUG [StoreCloser-hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/c559075bcb8741e4859507bb7fb7cfc8 2023-07-23 21:11:25,073 DEBUG [StoreCloser-hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/ef999fa06b66465f978c7309df40e37f 2023-07-23 21:11:25,074 DEBUG [StoreCloser-hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/314ee21ed86d420b8896380bfa6f8703 2023-07-23 21:11:25,077 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/recovered.edits/170.seqid, newMaxSeqId=170, maxSeqId=156 2023-07-23 21:11:25,077 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:25,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:25,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:11:25,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45513,1690146675147 record at close sequenceid=167 2023-07-23 21:11:25,080 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-23 21:11:25,081 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-23 21:11:25,083 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=135 2023-07-23 21:11:25,083 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40573,1690146674972 in 208 msec 2023-07-23 21:11:25,083 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45513,1690146675147; forceNewPlan=false, retain=false 2023-07-23 21:11:25,102 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=83 2023-07-23 21:11:25,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:25,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:25,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:25,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 674d6b4e3c5d6a4f0860e9c874b3e183 move to jenkins-hbase4.apache.org,45513,1690146675147 record at close sequenceid=95 2023-07-23 21:11:25,105 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=136, ppid=133, state=RUNNABLE; CloseRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:25,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,234 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45513,1690146675147, state=OPENING 2023-07-23 21:11:25,235 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:25,235 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=135, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45513,1690146675147}] 2023-07-23 21:11:25,235 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:25,391 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:11:25,391 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:25,392 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45513%2C1690146675147.meta, suffix=.meta, logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45513,1690146675147, archiveDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs, maxLogs=32 2023-07-23 21:11:25,409 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK] 2023-07-23 21:11:25,409 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK] 2023-07-23 21:11:25,409 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK] 2023-07-23 21:11:25,411 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,45513,1690146675147/jenkins-hbase4.apache.org%2C45513%2C1690146675147.meta.1690146685392.meta 2023-07-23 21:11:25,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39257,DS-9cccb944-77e5-4b0a-929d-b38957409f93,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-6c151b7c-b95d-426a-9e2b-4f02874248ad,DISK], DatanodeInfoWithStorage[127.0.0.1:33589,DS-aa8d0171-132d-4da2-b07d-13febd9cf809,DISK]] 2023-07-23 21:11:25,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:25,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:25,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:11:25,412 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-23 21:11:25,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:11:25,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:25,413 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:11:25,413 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:11:25,414 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:11:25,415 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:11:25,415 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info 2023-07-23 21:11:25,415 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:11:25,424 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994 2023-07-23 21:11:25,429 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:25,429 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:25,433 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba 2023-07-23 21:11:25,433 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:25,434 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:11:25,434 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:25,434 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:25,435 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:11:25,440 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27a108a6498540b9881fffee97f83a46 2023-07-23 21:11:25,440 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/rep_barrier/27a108a6498540b9881fffee97f83a46 2023-07-23 21:11:25,440 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:25,440 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:11:25,441 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:11:25,441 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table 2023-07-23 21:11:25,441 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:11:25,447 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/a91c64934a824ba3a000ed314e0f4688 2023-07-23 21:11:25,450 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:11:25,451 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/table/b405931d76734326ace1ba7e7a4c97d4 2023-07-23 21:11:25,451 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:25,451 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:11:25,452 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740 2023-07-23 21:11:25,455 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:11:25,456 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:11:25,456 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=171; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9755769760, jitterRate=-0.0914231389760971}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:11:25,457 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:11:25,457 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=139, masterSystemTime=1690146685387 2023-07-23 21:11:25,458 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 21:11:25,459 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 21:11:25,461 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 27652 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 21:11:25,461 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:11:25,461 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-23 21:11:25,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:11:25,461 INFO [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-23 21:11:25,461 INFO [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba] into tmpdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp, totalSize=27.0 K 2023-07-23 21:11:25,461 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45513,1690146675147, state=OPEN 2023-07-23 21:11:25,462 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] compactions.Compactor(207): Compacting 15c38b0ea71c46adb63b62b92a154f8d, keycount=57, bloomtype=NONE, size=11.1 K, encoding=NONE, compression=NONE, seqNum=138, earliestPutTs=1690146644754 2023-07-23 21:11:25,462 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] compactions.Compactor(207): Compacting 0319ae9753684fde96cc22ac6aee2994, keycount=26, bloomtype=NONE, 
size=7.7 K, encoding=NONE, compression=NONE, seqNum=153, earliestPutTs=1690146667304 2023-07-23 21:11:25,463 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] compactions.Compactor(207): Compacting 7ca6f0427221406095057d8a1c4eb7ba, keycount=30, bloomtype=NONE, size=8.2 K, encoding=NONE, compression=NONE, seqNum=167, earliestPutTs=1690146680113 2023-07-23 21:11:25,465 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:25,465 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:25,466 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=CLOSED 2023-07-23 21:11:25,466 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146685466"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146685466"}]},"ts":"1690146685466"} 2023-07-23 21:11:25,467 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40573] ipc.CallRunner(144): callId: 64 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:54576 deadline: 1690146745467, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45513 startCode=1690146675147. As of locationSeqNum=167. 2023-07-23 21:11:25,469 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-23 21:11:25,469 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45513,1690146675147 in 230 msec 2023-07-23 21:11:25,470 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 599 msec 2023-07-23 21:11:25,477 INFO [RS:2;jenkins-hbase4:45513-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#14 average throughput is 5.74 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 21:11:25,491 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/55db03650d0b496397008380400e7aca as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/55db03650d0b496397008380400e7aca 2023-07-23 21:11:25,498 INFO [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 55db03650d0b496397008380400e7aca(size=10.7 K), total size for store is 10.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-23 21:11:25,498 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-23 21:11:25,498 INFO [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1690146685457; duration=0sec 2023-07-23 21:11:25,498 DEBUG [RS:2;jenkins-hbase4:45513-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 21:11:25,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=133 2023-07-23 21:11:25,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=133, state=SUCCESS; CloseRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,36881,1690146674798 in 699 msec 2023-07-23 21:11:25,573 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45513,1690146675147; forceNewPlan=false, retain=false 2023-07-23 21:11:25,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 99f4bb247673f611dc82de993563e38b, disabling compactions & flushes 2023-07-23 21:11:25,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:25,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:25,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. after waiting 0 ms 2023-07-23 21:11:25,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:25,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:11:25,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:25,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:25,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 99f4bb247673f611dc82de993563e38b move to jenkins-hbase4.apache.org,45513,1690146675147 record at close sequenceid=5 2023-07-23 21:11:25,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,626 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=CLOSED 2023-07-23 21:11:25,626 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146685626"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146685626"}]},"ts":"1690146685626"} 2023-07-23 21:11:25,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=134 2023-07-23 21:11:25,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=134, state=SUCCESS; CloseRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,40573,1690146674972 in 755 msec 2023-07-23 21:11:25,629 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=134, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45513,1690146675147; forceNewPlan=false, retain=false 2023-07-23 21:11:25,630 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:25,630 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146685629"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146685629"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146685629"}]},"ts":"1690146685629"} 2023-07-23 21:11:25,630 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:25,630 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146685630"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146685630"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146685630"}]},"ts":"1690146685630"} 2023-07-23 21:11:25,631 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=133, state=RUNNABLE; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,45513,1690146675147}] 2023-07-23 21:11:25,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=134, state=RUNNABLE; OpenRegionProcedure 
99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,45513,1690146675147}] 2023-07-23 21:11:25,786 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674d6b4e3c5d6a4f0860e9c874b3e183, NAME => 'hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. service=MultiRowMutationService 2023-07-23 21:11:25,787 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,788 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,789 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:11:25,789 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m 2023-07-23 21:11:25,790 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674d6b4e3c5d6a4f0860e9c874b3e183 columnFamilyName m 2023-07-23 21:11:25,796 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06a38f371d094f5095ff12f34605e845 2023-07-23 21:11:25,796 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/06a38f371d094f5095ff12f34605e845 2023-07-23 21:11:25,800 DEBUG [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(539): loaded hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/2dccac9ad3c64117ac4486d1b2cba9e0 2023-07-23 21:11:25,800 INFO [StoreOpener-674d6b4e3c5d6a4f0860e9c874b3e183-1] regionserver.HStore(310): Store=674d6b4e3c5d6a4f0860e9c874b3e183/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:25,801 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,804 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:25,805 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674d6b4e3c5d6a4f0860e9c874b3e183; next sequenceid=99; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@24763f55, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:25,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:25,806 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., pid=140, masterSystemTime=1690146685783 2023-07-23 21:11:25,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:25,809 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:25,809 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:25,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 99f4bb247673f611dc82de993563e38b, NAME => 'hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:25,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,809 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=674d6b4e3c5d6a4f0860e9c874b3e183, regionState=OPEN, openSeqNum=99, regionLocation=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:25,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:25,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,809 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146685809"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146685809"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146685809"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146685809"}]},"ts":"1690146685809"} 2023-07-23 21:11:25,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,811 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,812 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/q 2023-07-23 21:11:25,812 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/q 2023-07-23 21:11:25,813 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
99f4bb247673f611dc82de993563e38b columnFamilyName q 2023-07-23 21:11:25,813 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(310): Store=99f4bb247673f611dc82de993563e38b/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:25,813 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,814 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=133 2023-07-23 21:11:25,814 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=133, state=SUCCESS; OpenRegionProcedure 674d6b4e3c5d6a4f0860e9c874b3e183, server=jenkins-hbase4.apache.org,45513,1690146675147 in 180 msec 2023-07-23 21:11:25,815 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/u 2023-07-23 21:11:25,815 DEBUG [StoreOpener-99f4bb247673f611dc82de993563e38b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/u 2023-07-23 21:11:25,815 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99f4bb247673f611dc82de993563e38b columnFamilyName u 2023-07-23 21:11:25,815 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=674d6b4e3c5d6a4f0860e9c874b3e183, REOPEN/MOVE in 946 msec 2023-07-23 21:11:25,816 INFO [StoreOpener-99f4bb247673f611dc82de993563e38b-1] regionserver.HStore(310): Store=99f4bb247673f611dc82de993563e38b/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:25,817 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,820 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-23 21:11:25,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:25,823 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 99f4bb247673f611dc82de993563e38b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11849347360, jitterRate=0.10355646908283234}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-23 21:11:25,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:25,826 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b., pid=141, masterSystemTime=1690146685783 2023-07-23 21:11:25,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:25,834 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:25,834 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=99f4bb247673f611dc82de993563e38b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:25,834 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146685834"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146685834"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146685834"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146685834"}]},"ts":"1690146685834"} 2023-07-23 21:11:25,837 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=134 2023-07-23 21:11:25,837 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=134, state=SUCCESS; OpenRegionProcedure 99f4bb247673f611dc82de993563e38b, server=jenkins-hbase4.apache.org,45513,1690146675147 in 204 msec 2023-07-23 21:11:25,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=99f4bb247673f611dc82de993563e38b, REOPEN/MOVE in 968 msec 2023-07-23 21:11:25,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] procedure.ProcedureSyncWait(216): waitFor pid=133 2023-07-23 21:11:25,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36881,1690146674798, jenkins-hbase4.apache.org,38679,1690146684514, jenkins-hbase4.apache.org,40573,1690146674972] are moved back to default 2023-07-23 21:11:25,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] 
rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_30812797 2023-07-23 21:11:25,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:25,873 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36881] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:41702 deadline: 1690146745873, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45513 startCode=1690146675147. As of locationSeqNum=95. 2023-07-23 21:11:25,983 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40573] ipc.CallRunner(144): callId: 5 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:49890 deadline: 1690146745983, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45513 startCode=1690146675147. As of locationSeqNum=167. 2023-07-23 21:11:26,085 DEBUG [hconnection-0x18660f26-shared-pool-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:26,087 INFO [RS-EventLoopGroup-16-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41528, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:26,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:26,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:26,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_30812797 2023-07-23 21:11:26,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:26,104 DEBUG [Listener at localhost/38995] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:26,105 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:26,105 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36881] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36881,1690146674798' ***** 2023-07-23 21:11:26,105 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36881] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x788691d2 2023-07-23 21:11:26,105 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:26,107 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:26,108 INFO [RS:0;jenkins-hbase4:36881] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@dd9df98{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:26,109 INFO [RS:0;jenkins-hbase4:36881] server.AbstractConnector(383): Stopped ServerConnector@318fe04f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,109 INFO [RS:0;jenkins-hbase4:36881] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:26,109 INFO [RS:0;jenkins-hbase4:36881] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@333fd2bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:26,110 INFO [RS:0;jenkins-hbase4:36881] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@54ad8272{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:26,111 INFO [RS:0;jenkins-hbase4:36881] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:26,111 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:26,111 INFO [RS:0;jenkins-hbase4:36881] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:26,111 INFO [RS:0;jenkins-hbase4:36881] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:26,111 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:26,111 DEBUG [RS:0;jenkins-hbase4:36881] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09c81b8a to 127.0.0.1:59847 2023-07-23 21:11:26,111 DEBUG [RS:0;jenkins-hbase4:36881] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,111 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36881,1690146674798; all regions closed. 
2023-07-23 21:11:26,114 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798/jenkins-hbase4.apache.org%2C36881%2C1690146674798.1690146675847 not finished, retry = 0 2023-07-23 21:11:26,125 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,217 DEBUG [RS:0;jenkins-hbase4:36881] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:26,217 INFO [RS:0;jenkins-hbase4:36881] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36881%2C1690146674798:(num 1690146675847) 2023-07-23 21:11:26,217 DEBUG [RS:0;jenkins-hbase4:36881] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,217 INFO [RS:0;jenkins-hbase4:36881] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,217 INFO [RS:0;jenkins-hbase4:36881] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:26,217 INFO [RS:0;jenkins-hbase4:36881] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:26,217 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:26,217 INFO [RS:0;jenkins-hbase4:36881] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:26,218 INFO [RS:0;jenkins-hbase4:36881] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 21:11:26,219 INFO [RS:0;jenkins-hbase4:36881] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36881 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,221 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,223 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,223 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,223 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36881,1690146674798] 2023-07-23 21:11:26,223 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36881,1690146674798; numProcessing=1 2023-07-23 21:11:26,223 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,223 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,223 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,226 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 znode expired, triggering replicatorRemoved event 2023-07-23 21:11:26,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,226 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36881,1690146674798 already deleted, retry=false 2023-07-23 21:11:26,226 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 znode expired, triggering replicatorRemoved event 2023-07-23 21:11:26,226 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,36881,1690146674798 on jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:26,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,226 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36881,1690146674798 znode expired, triggering replicatorRemoved event 2023-07-23 21:11:26,227 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,227 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,227 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,227 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,228 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=142, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,36881,1690146674798, splitWal=true, meta=false 2023-07-23 21:11:26,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,228 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=142 for jenkins-hbase4.apache.org,36881,1690146674798 (carryingMeta=false) jenkins-hbase4.apache.org,36881,1690146674798/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5648724d[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-23 21:11:26,229 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:26,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,230 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:36881 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:36881 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:11:26,230 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=142, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36881,1690146674798, splitWal=true, meta=false 2023-07-23 21:11:26,233 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:36881 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:36881 2023-07-23 21:11:26,234 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,36881,1690146674798 had 0 regions 2023-07-23 21:11:26,235 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=142, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36881,1690146674798, splitWal=true, meta=false, isMeta: false 2023-07-23 21:11:26,236 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798-splitting 2023-07-23 21:11:26,237 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798-splitting dir is empty, no logs to split. 2023-07-23 21:11:26,237 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,36881,1690146674798 WAL count=0, meta=false 2023-07-23 21:11:26,239 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798-splitting dir is empty, no logs to split. 2023-07-23 21:11:26,239 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,36881,1690146674798 WAL count=0, meta=false 2023-07-23 21:11:26,239 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,36881,1690146674798 WAL splitting is done? 
wals=0, meta=false 2023-07-23 21:11:26,241 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,36881,1690146674798 failed, ignore...File hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,36881,1690146674798-splitting does not exist. 2023-07-23 21:11:26,242 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,36881,1690146674798 after splitting done 2023-07-23 21:11:26,242 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase4.apache.org,36881,1690146674798 from processing; numProcessing=0 2023-07-23 21:11:26,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36881,1690146674798, splitWal=true, meta=false in 16 msec 2023-07-23 21:11:26,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(2362): Client=jenkins//172.31.14.131 clear dead region servers. 2023-07-23 21:11:26,323 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:26,323 INFO [RS:0;jenkins-hbase4:36881] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36881,1690146674798; zookeeper connection closed. 2023-07-23 21:11:26,323 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:36881-0x1019405901c001d, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:26,324 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c6a9c76] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c6a9c76 2023-07-23 21:11:26,340 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,341 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:26,341 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_30812797 2023-07-23 21:11:26,342 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:11:26,343 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:11:26,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:26,346 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_30812797 2023-07-23 21:11:26,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:11:26,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase4.apache.org:36881] from RSGroup done 2023-07-23 21:11:26,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_30812797 2023-07-23 21:11:26,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:26,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:26,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:26,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:26,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
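The entries above close out the crash handling for jenkins-hbase4.apache.org,36881: the ServerCrashProcedure finds an empty -splitting directory (nothing to replay), drops the server from the dead-server list, and the subsequent "clear dead region servers" RPC lets the rsgroup coprocessor remove the address from its group ("Remove decommissioned servers [...] from RSGroup done"). A minimal client-side sketch of that last step, assuming an open Connection; the helper class and method names are illustrative, not the test's literal code:

    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    final class DeadServerCleanup {                      // hypothetical helper class
      static void clearCrashedServer(Connection conn, ServerName crashed) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // "clear dead region servers" -- returns any servers that could not be cleared.
          // With the RSGroupAdminEndpoint coprocessor loaded, the cleared address is also
          // dropped from its rsgroup, which is the "Remove decommissioned servers" line.
          List<ServerName> notCleared =
              admin.clearDeadServers(Collections.singletonList(crashed));
          if (!notCleared.isEmpty()) {
            System.out.println("Still listed as dead: " + notCleared);
          }
        }
      }
    }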
2023-07-23 21:11:26,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:26,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:26,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:26,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:26,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_30812797 2023-07-23 21:11:26,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:11:26,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:26,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:26,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
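The teardown then walks each remaining group: move its tables back to default, move its servers back to default, and remove the group (the "remove rsgroup master" call above; the test group Group_testClearDeadServers_30812797 gets the same treatment in the entries that follow). A hedged sketch of that loop using the branch-2.4 RSGroupAdminClient API; the class and method names here are illustrative:

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class GroupTeardown {                          // hypothetical helper class
      static void dropAllNonDefaultGroups(Connection conn) throws Exception {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : groupAdmin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue;                                    // the default group is never removed
          }
          // Empty sets are tolerated: "moveTables() passed an empty set. Ignoring."
          groupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
          groupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
          groupAdmin.removeRSGroup(group.getName());
        }
      }
    }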
2023-07-23 21:11:26,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:26,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38679, jenkins-hbase4.apache.org:40573] to rsgroup default 2023-07-23 21:11:26,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_30812797 2023-07-23 21:11:26,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:26,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_30812797, current retry=0 2023-07-23 21:11:26,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38679,1690146684514, jenkins-hbase4.apache.org,40573,1690146674972] are moved back to Group_testClearDeadServers_30812797 2023-07-23 21:11:26,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_30812797 => default 2023-07-23 21:11:26,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:26,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testClearDeadServers_30812797 2023-07-23 21:11:26,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:26,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:26,379 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo 
writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:26,391 INFO [Listener at localhost/38995] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:26,392 INFO [Listener at localhost/38995] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34137 2023-07-23 21:11:26,392 INFO [Listener at localhost/38995] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:26,394 DEBUG [Listener at localhost/38995] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:26,395 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:26,395 INFO [Listener at localhost/38995] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:26,396 INFO [Listener at localhost/38995] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34137 connecting to ZooKeeper ensemble=127.0.0.1:59847 2023-07-23 21:11:26,399 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:341370x0, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:26,400 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34137-0x1019405901c002a connected 2023-07-23 21:11:26,400 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:26,401 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 21:11:26,402 DEBUG [Listener at localhost/38995] zookeeper.ZKUtil(164): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:26,402 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34137 2023-07-23 21:11:26,402 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34137 2023-07-23 21:11:26,403 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34137 2023-07-23 21:11:26,403 DEBUG [Listener at localhost/38995] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34137 2023-07-23 21:11:26,403 DEBUG [Listener at localhost/38995] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34137 2023-07-23 21:11:26,405 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:26,406 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:26,406 INFO [Listener at localhost/38995] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:26,407 INFO [Listener at localhost/38995] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:26,407 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:26,407 INFO [Listener at localhost/38995] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:26,407 INFO [Listener at localhost/38995] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:26,407 INFO [Listener at localhost/38995] http.HttpServer(1146): Jetty bound to port 33255 2023-07-23 21:11:26,407 INFO [Listener at localhost/38995] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:26,410 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:26,411 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fe92a34{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:26,411 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:26,411 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23b845ee{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:26,526 INFO [Listener at localhost/38995] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:26,527 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:26,528 INFO [Listener at localhost/38995] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:26,528 INFO [Listener at localhost/38995] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:11:26,529 INFO [Listener at localhost/38995] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:26,530 INFO [Listener at localhost/38995] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3a1e22cf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/java.io.tmpdir/jetty-0_0_0_0-33255-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7601816507393269609/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:26,532 INFO [Listener at localhost/38995] server.AbstractConnector(333): Started ServerConnector@7602d688{HTTP/1.1, (http/1.1)}{0.0.0.0:33255} 2023-07-23 21:11:26,532 INFO [Listener at localhost/38995] server.Server(415): Started @52332ms 2023-07-23 21:11:26,535 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(951): ClusterId : af283b81-9f55-4ee2-9fb9-1bc2cdf9cea4 2023-07-23 21:11:26,535 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:26,538 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:26,538 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:26,539 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:26,540 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ReadOnlyZKClient(139): Connect 0x6ddf0ea2 to 127.0.0.1:59847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:26,543 DEBUG [RS:4;jenkins-hbase4:34137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6318fd69, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:26,544 DEBUG [RS:4;jenkins-hbase4:34137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@672765d2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:26,552 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:34137 2023-07-23 21:11:26,552 INFO [RS:4;jenkins-hbase4:34137] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:26,552 INFO [RS:4;jenkins-hbase4:34137] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:26,552 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1022): About to register with Master. 
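Everything from "Restoring servers: 1" onward is a fresh region server (RS:4, port 34137) coming up inside the mini cluster: RPC executors, the Netty RPC server bind, the Jetty info server, the ZooKeeper session, and then reportForDuty. In test code this is usually a one-liner against the mini cluster; a sketch under the assumption that testUtil is the suite's HBaseTestingUtility (the helper name is made up):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    final class RestoreServers {                         // hypothetical helper class
      static void restoreOne(HBaseTestingUtility testUtil, int expectedLiveServers)
          throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        cluster.startRegionServer();                     // starts a new RS thread, like RS:4 above
        // Block until the new server has registered with the master.
        testUtil.waitFor(60_000,
            () -> cluster.getLiveRegionServerThreads().size() >= expectedLiveServers);
      }
    }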
2023-07-23 21:11:26,552 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40555,1690146674547 with isa=jenkins-hbase4.apache.org/172.31.14.131:34137, startcode=1690146686390 2023-07-23 21:11:26,553 DEBUG [RS:4;jenkins-hbase4:34137] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:26,554 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60185, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:26,555 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,555 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:26,555 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914 2023-07-23 21:11:26,555 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32841 2023-07-23 21:11:26,555 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36633 2023-07-23 21:11:26,557 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,557 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,557 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,557 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,557 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,557 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,557 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34137,1690146686390] 2023-07-23 21:11:26,557 WARN [RS:4;jenkins-hbase4:34137] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
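The reportForDuty handshake ends with an ephemeral znode under /hbase/rs, which is what the RegionServerTracker and the burst of ZKWatcher events above and below are reacting to. One way to observe that registration directly is to list the children of /hbase/rs with a plain ZooKeeper client; the quorum address 127.0.0.1:59847 and base znode /hbase are the values from this log, so adjust them for a real cluster:

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    final class ListRegionServerZNodes {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:59847", 90_000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
        });
        connected.await();
        // Each child is an ephemeral node named host,port,startcode,
        // e.g. jenkins-hbase4.apache.org,34137,1690146686390
        List<String> servers = zk.getChildren("/hbase/rs", false);
        servers.forEach(System.out::println);
        zk.close();
      }
    }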
2023-07-23 21:11:26,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,557 INFO [RS:4;jenkins-hbase4:34137] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:26,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,558 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:11:26,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,558 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/WALs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,560 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40555,1690146674547] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 21:11:26,560 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,560 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,560 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,561 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,561 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,561 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,562 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,562 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,562 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,562 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ZKUtil(162): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,563 DEBUG [RS:4;jenkins-hbase4:34137] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:26,563 INFO [RS:4;jenkins-hbase4:34137] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:26,566 INFO [RS:4;jenkins-hbase4:34137] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:26,566 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,568 INFO [RS:4;jenkins-hbase4:34137] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:26,568 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:26,568 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:26,569 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,573 DEBUG [RS:4;jenkins-hbase4:34137] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:26,574 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:26,574 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:26,574 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:26,585 INFO [RS:4;jenkins-hbase4:34137] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:26,585 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34137,1690146686390-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
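The ScheduledChore registrations above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, CompactedHFilesCleaner, the HeapMemoryTunerChore) all go through the same ChoreService pattern. A minimal standalone sketch of that pattern with an illustrative chore name and period; it is not code from the region server itself, and it assumes the public ChoreService/ScheduledChore constructors in hbase-common:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    final class ChoreDemo {
      public static void main(String[] args) throws Exception {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("demo");
        // Runs every second, like "ScheduledChore name=CompactionChecker, period=1000" above.
        service.scheduleChore(
            new ScheduledChore("demoChecker", stopper, 1000, 0, TimeUnit.MILLISECONDS) {
              @Override protected void chore() {
                System.out.println("periodic work");
              }
            });
        Thread.sleep(3_000);
        service.shutdown();
      }
    }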
2023-07-23 21:11:26,596 INFO [RS:4;jenkins-hbase4:34137] regionserver.Replication(203): jenkins-hbase4.apache.org,34137,1690146686390 started 2023-07-23 21:11:26,596 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34137,1690146686390, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34137, sessionid=0x1019405901c002a 2023-07-23 21:11:26,596 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:26,596 DEBUG [RS:4;jenkins-hbase4:34137] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,596 DEBUG [RS:4;jenkins-hbase4:34137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34137,1690146686390' 2023-07-23 21:11:26,596 DEBUG [RS:4;jenkins-hbase4:34137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:26,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34137,1690146686390' 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:26,597 DEBUG [RS:4;jenkins-hbase4:34137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:26,598 DEBUG [RS:4;jenkins-hbase4:34137] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:26,598 INFO [RS:4;jenkins-hbase4:34137] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:26,598 INFO [RS:4;jenkins-hbase4:34137] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 21:11:26,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:26,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:26,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:26,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:26,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:26,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:26,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40555] to rsgroup master 2023-07-23 21:11:26,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:26,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33512 deadline: 1690147886608, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 2023-07-23 21:11:26,608 WARN [Listener at localhost/38995] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor65.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:11:26,613 INFO [Listener at localhost/38995] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:26,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:26,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:26,614 INFO [Listener at localhost/38995] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34137, jenkins-hbase4.apache.org:38679, jenkins-hbase4.apache.org:40573, jenkins-hbase4.apache.org:45513], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:26,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:26,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:26,637 INFO [Listener at localhost/38995] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=584 (was 551) - Thread LEAK? -, OpenFileDescriptor=896 (was 840) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=420 (was 420), ProcessCount=173 (was 173), AvailableMemoryMB=7851 (was 7866) 2023-07-23 21:11:26,637 WARN [Listener at localhost/38995] hbase.ResourceChecker(130): Thread=584 is superior to 500 2023-07-23 21:11:26,637 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 21:11:26,637 INFO [Listener at localhost/38995] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 21:11:26,637 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ee10034 to 127.0.0.1:59847 2023-07-23 21:11:26,637 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,637 DEBUG [Listener at localhost/38995] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 21:11:26,637 DEBUG [Listener at localhost/38995] util.JVMClusterUtil(257): Found active master hash=886326475, stopped=false 2023-07-23 21:11:26,638 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:11:26,638 DEBUG [Listener at localhost/38995] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:11:26,638 INFO [Listener at localhost/38995] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:26,640 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:26,640 DEBUG [Listener at 
localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:26,640 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:26,640 INFO [Listener at localhost/38995] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 21:11:26,640 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:26,640 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:26,641 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:26,641 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:26,641 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:26,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:26,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:26,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:26,643 DEBUG [Listener at localhost/38995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x452f343f to 127.0.0.1:59847 2023-07-23 21:11:26,643 DEBUG [Listener at localhost/38995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,644 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40573,1690146674972' ***** 2023-07-23 21:11:26,644 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:26,644 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:26,644 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45513,1690146675147' ***** 2023-07-23 21:11:26,645 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:26,645 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase4.apache.org,38679,1690146684514' ***** 2023-07-23 21:11:26,645 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:26,645 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:26,645 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:26,645 INFO [Listener at localhost/38995] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34137,1690146686390' ***** 2023-07-23 21:11:26,646 INFO [Listener at localhost/38995] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:26,646 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:26,649 INFO [RS:1;jenkins-hbase4:40573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1990cee4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:26,650 INFO [RS:3;jenkins-hbase4:38679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@792095d2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:26,650 INFO [RS:2;jenkins-hbase4:45513] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7bb9c15d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:26,650 INFO [RS:1;jenkins-hbase4:40573] server.AbstractConnector(383): Stopped ServerConnector@727c0667{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,650 INFO [RS:1;jenkins-hbase4:40573] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:26,650 INFO [RS:2;jenkins-hbase4:45513] server.AbstractConnector(383): Stopped ServerConnector@7c81bac{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,650 INFO [RS:2;jenkins-hbase4:45513] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:26,650 INFO [RS:3;jenkins-hbase4:38679] server.AbstractConnector(383): Stopped ServerConnector@1ee82094{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,650 INFO [RS:3;jenkins-hbase4:38679] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:26,651 INFO [RS:1;jenkins-hbase4:40573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e02a8be{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:26,652 INFO [RS:2;jenkins-hbase4:45513] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@831f13d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:26,653 INFO [RS:3;jenkins-hbase4:38679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@452c2b42{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:26,653 INFO [RS:2;jenkins-hbase4:45513] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@9d64b09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:26,653 INFO [RS:1;jenkins-hbase4:40573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ff19d03{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:26,654 INFO [RS:3;jenkins-hbase4:38679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f637f0a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:26,654 INFO [RS:4;jenkins-hbase4:34137] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3a1e22cf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:26,655 INFO [RS:1;jenkins-hbase4:40573] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:26,655 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:26,655 INFO [RS:1;jenkins-hbase4:40573] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:26,655 INFO [RS:1;jenkins-hbase4:40573] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:26,655 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,655 DEBUG [RS:1;jenkins-hbase4:40573] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x38b5f8c1 to 127.0.0.1:59847 2023-07-23 21:11:26,655 DEBUG [RS:1;jenkins-hbase4:40573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,655 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40573,1690146674972; all regions closed. 2023-07-23 21:11:26,655 INFO [RS:2;jenkins-hbase4:45513] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:26,655 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:26,655 INFO [RS:4;jenkins-hbase4:34137] server.AbstractConnector(383): Stopped ServerConnector@7602d688{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,655 INFO [RS:4;jenkins-hbase4:34137] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:26,659 INFO [RS:3;jenkins-hbase4:38679] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:26,659 INFO [RS:3;jenkins-hbase4:38679] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:26,659 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:26,659 INFO [RS:3;jenkins-hbase4:38679] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
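Before this shutdown began, the teardown tried to move the active master's address (jenkins-hbase4.apache.org:40555) into the "master" group and received the ConstraintException shown above; TestRSGroupsBase logs it as "Got this on setup, FYI" and continues, then polls ("Waiting for cleanup to finish") until only the expected groups remain. A hedged sketch of that tolerate-and-wait pattern; the helper name, the hard-coded "master" group, and the group-count check are illustrative assumptions:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class TolerantTeardown {                       // hypothetical helper class
      static void moveMasterBestEffort(RSGroupAdminClient groupAdmin, Address master,
          HBaseTestingUtility testUtil) throws Exception {
        try {
          groupAdmin.moveServers(Collections.singleton(master), "master");
        } catch (ConstraintException e) {
          // Unwrapped remote exception: "Server ... is either offline or it does not exist."
          // Expected in this teardown, so note it and keep going.
        }
        // Poll until only the default and master bookkeeping groups are left.
        testUtil.waitFor(60_000, () -> groupAdmin.listRSGroups().size() <= 2);
      }
    }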
2023-07-23 21:11:26,659 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,659 DEBUG [RS:3;jenkins-hbase4:38679] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1efd747d to 127.0.0.1:59847 2023-07-23 21:11:26,659 DEBUG [RS:3;jenkins-hbase4:38679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,659 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38679,1690146684514; all regions closed. 2023-07-23 21:11:26,664 INFO [RS:2;jenkins-hbase4:45513] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:26,665 INFO [RS:2;jenkins-hbase4:45513] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:26,665 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(3305): Received CLOSE for cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(3305): Received CLOSE for 674d6b4e3c5d6a4f0860e9c874b3e183 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(3305): Received CLOSE for 99f4bb247673f611dc82de993563e38b 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,668 DEBUG [RS:2;jenkins-hbase4:45513] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7de674e9 to 127.0.0.1:59847 2023-07-23 21:11:26,668 DEBUG [RS:2;jenkins-hbase4:45513] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:26,668 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 21:11:26,672 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-23 21:11:26,672 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1478): Online Regions={cfdae6c1dde0d9be1f26f623634660ba=hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba., 674d6b4e3c5d6a4f0860e9c874b3e183=hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183., 1588230740=hbase:meta,,1.1588230740, 99f4bb247673f611dc82de993563e38b=hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b.} 2023-07-23 21:11:26,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cfdae6c1dde0d9be1f26f623634660ba, disabling compactions & flushes 2023-07-23 21:11:26,673 INFO [RS:4;jenkins-hbase4:34137] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23b845ee{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:26,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 
2023-07-23 21:11:26,674 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:26,675 INFO [RS:4;jenkins-hbase4:34137] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fe92a34{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:26,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:11:26,677 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:11:26,677 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:11:26,674 DEBUG [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1504): Waiting on 1588230740, 674d6b4e3c5d6a4f0860e9c874b3e183, 99f4bb247673f611dc82de993563e38b, cfdae6c1dde0d9be1f26f623634660ba 2023-07-23 21:11:26,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:11:26,679 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:11:26,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.48 KB 2023-07-23 21:11:26,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:26,675 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. after waiting 0 ms 2023-07-23 21:11:26,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:26,680 DEBUG [RS:3;jenkins-hbase4:38679] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:26,680 INFO [RS:3;jenkins-hbase4:38679] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38679%2C1690146684514:(num 1690146684819) 2023-07-23 21:11:26,680 DEBUG [RS:3;jenkins-hbase4:38679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,680 INFO [RS:3;jenkins-hbase4:38679] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,681 INFO [RS:4;jenkins-hbase4:34137] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:26,681 INFO [RS:4;jenkins-hbase4:34137] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:26,681 INFO [RS:4;jenkins-hbase4:34137] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 21:11:26,681 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,681 DEBUG [RS:4;jenkins-hbase4:34137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ddf0ea2 to 127.0.0.1:59847 2023-07-23 21:11:26,682 DEBUG [RS:4;jenkins-hbase4:34137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,682 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34137,1690146686390; all regions closed. 2023-07-23 21:11:26,682 DEBUG [RS:4;jenkins-hbase4:34137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,682 INFO [RS:3;jenkins-hbase4:38679] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:26,682 INFO [RS:3;jenkins-hbase4:38679] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:26,682 INFO [RS:4;jenkins-hbase4:34137] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,682 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:26,682 INFO [RS:3;jenkins-hbase4:38679] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:26,682 INFO [RS:3;jenkins-hbase4:38679] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:26,683 INFO [RS:3;jenkins-hbase4:38679] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38679 2023-07-23 21:11:26,684 INFO [RS:4;jenkins-hbase4:34137] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:26,684 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:26,684 INFO [RS:4;jenkins-hbase4:34137] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:26,685 INFO [RS:4;jenkins-hbase4:34137] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:26,685 INFO [RS:4;jenkins-hbase4:34137] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 21:11:26,686 INFO [RS:4;jenkins-hbase4:34137] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34137 2023-07-23 21:11:26,686 DEBUG [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:26,686 INFO [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40573%2C1690146674972.meta:.meta(num 1690146675910) 2023-07-23 21:11:26,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/namespace/cfdae6c1dde0d9be1f26f623634660ba/recovered.edits/20.seqid, newMaxSeqId=20, maxSeqId=17 2023-07-23 21:11:26,694 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:26,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cfdae6c1dde0d9be1f26f623634660ba: 2023-07-23 21:11:26,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690146644614.cfdae6c1dde0d9be1f26f623634660ba. 2023-07-23 21:11:26,696 DEBUG [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:26,696 INFO [RS:1;jenkins-hbase4:40573] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40573%2C1690146674972:(num 1690146675836) 2023-07-23 21:11:26,696 DEBUG [RS:1;jenkins-hbase4:40573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,696 INFO [RS:1;jenkins-hbase4:40573] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674d6b4e3c5d6a4f0860e9c874b3e183, disabling compactions & flushes 2023-07-23 21:11:26,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:26,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:26,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. after waiting 0 ms 2023-07-23 21:11:26,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 
2023-07-23 21:11:26,699 INFO [RS:1;jenkins-hbase4:40573] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:26,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 674d6b4e3c5d6a4f0860e9c874b3e183 1/1 column families, dataSize=2.06 KB heapSize=3.48 KB 2023-07-23 21:11:26,699 INFO [RS:1;jenkins-hbase4:40573] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:26,699 INFO [RS:1;jenkins-hbase4:40573] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:26,699 INFO [RS:1;jenkins-hbase4:40573] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:26,699 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:26,700 INFO [RS:1;jenkins-hbase4:40573] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40573 2023-07-23 21:11:26,704 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=180 (bloomFilter=false), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/e2c4453343854b72810dc8f0fd27241c 2023-07-23 21:11:26,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/.tmp/info/e2c4453343854b72810dc8f0fd27241c as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/e2c4453343854b72810dc8f0fd27241c 2023-07-23 21:11:26,713 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,713 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/e2c4453343854b72810dc8f0fd27241c, entries=20, sequenceid=180, filesize=7.0 K 2023-07-23 21:11:26,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2314, heapSize ~3.97 KB/4064, currentSize=0 B/0 for 1588230740 in 38ms, sequenceid=180, compaction requested=false 2023-07-23 21:11:26,727 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 21:11:26,728 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba] to archive 2023-07-23 21:11:26,728 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: 
CompactionChecker was stopped 2023-07-23 21:11:26,729 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-23 21:11:26,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.06 KB at sequenceid=108 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/372a0c1f68f044a88d79b9e189dcbbfb 2023-07-23 21:11:26,731 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/hbase/meta/1588230740/info/15c38b0ea71c46adb63b62b92a154f8d 2023-07-23 21:11:26,733 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994 to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/hbase/meta/1588230740/info/0319ae9753684fde96cc22ac6aee2994 2023-07-23 21:11:26,735 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba to hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/archive/data/hbase/meta/1588230740/info/7ca6f0427221406095057d8a1c4eb7ba 2023-07-23 21:11:26,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 372a0c1f68f044a88d79b9e189dcbbfb 2023-07-23 21:11:26,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/.tmp/m/372a0c1f68f044a88d79b9e189dcbbfb as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/372a0c1f68f044a88d79b9e189dcbbfb 2023-07-23 21:11:26,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/meta/1588230740/recovered.edits/183.seqid, newMaxSeqId=183, maxSeqId=170 2023-07-23 21:11:26,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:26,749 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:26,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:11:26,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:26,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for 372a0c1f68f044a88d79b9e189dcbbfb 2023-07-23 21:11:26,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/m/372a0c1f68f044a88d79b9e189dcbbfb, entries=4, sequenceid=108, filesize=5.3 K 2023-07-23 21:11:26,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.06 KB/2108, heapSize ~3.46 KB/3544, currentSize=0 B/0 for 674d6b4e3c5d6a4f0860e9c874b3e183 in 52ms, sequenceid=108, compaction requested=true 2023-07-23 21:11:26,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/rsgroup/674d6b4e3c5d6a4f0860e9c874b3e183/recovered.edits/111.seqid, newMaxSeqId=111, maxSeqId=98 2023-07-23 21:11:26,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:26,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:26,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674d6b4e3c5d6a4f0860e9c874b3e183: 2023-07-23 21:11:26,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690146644773.674d6b4e3c5d6a4f0860e9c874b3e183. 2023-07-23 21:11:26,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 99f4bb247673f611dc82de993563e38b, disabling compactions & flushes 2023-07-23 21:11:26,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:26,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:26,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. after waiting 0 ms 2023-07-23 21:11:26,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:26,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/data/hbase/quota/99f4bb247673f611dc82de993563e38b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 21:11:26,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 2023-07-23 21:11:26,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 99f4bb247673f611dc82de993563e38b: 2023-07-23 21:11:26,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690146668289.99f4bb247673f611dc82de993563e38b. 
2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,764 DEBUG [Listener at localhost/38995-EventThread] 
zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,765 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40573,1690146674972 2023-07-23 21:11:26,765 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,765 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34137,1690146686390 2023-07-23 21:11:26,765 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38679,1690146684514 2023-07-23 21:11:26,765 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38679,1690146684514] 2023-07-23 21:11:26,765 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38679,1690146684514; numProcessing=1 2023-07-23 21:11:26,770 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38679,1690146684514 already deleted, retry=false 2023-07-23 21:11:26,770 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38679,1690146684514 expired; onlineServers=3 2023-07-23 21:11:26,770 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40573,1690146674972] 2023-07-23 21:11:26,770 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40573,1690146674972; numProcessing=2 2023-07-23 21:11:26,771 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40573,1690146674972 already deleted, retry=false 2023-07-23 21:11:26,771 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40573,1690146674972 expired; onlineServers=2 2023-07-23 21:11:26,771 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34137,1690146686390] 2023-07-23 21:11:26,771 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34137,1690146686390; numProcessing=3 2023-07-23 21:11:26,772 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34137,1690146686390 already deleted, retry=false 2023-07-23 21:11:26,772 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34137,1690146686390 expired; onlineServers=1 2023-07-23 21:11:26,879 INFO 
[RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45513,1690146675147; all regions closed. 2023-07-23 21:11:26,884 DEBUG [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:26,885 INFO [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45513%2C1690146675147.meta:.meta(num 1690146685392) 2023-07-23 21:11:26,891 DEBUG [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/oldWALs 2023-07-23 21:11:26,891 INFO [RS:2;jenkins-hbase4:45513] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45513%2C1690146675147:(num 1690146675840) 2023-07-23 21:11:26,891 DEBUG [RS:2;jenkins-hbase4:45513] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,891 INFO [RS:2;jenkins-hbase4:45513] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:26,891 INFO [RS:2;jenkins-hbase4:45513] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:26,891 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:26,892 INFO [RS:2;jenkins-hbase4:45513] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45513 2023-07-23 21:11:26,894 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45513,1690146675147 2023-07-23 21:11:26,894 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:26,894 ERROR [Listener at localhost/38995-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@174d56bc rejected from java.util.concurrent.ThreadPoolExecutor@1372c08d[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 12] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-23 21:11:26,896 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45513,1690146675147] 2023-07-23 21:11:26,896 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45513,1690146675147; 
numProcessing=4 2023-07-23 21:11:26,897 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45513,1690146675147 already deleted, retry=false 2023-07-23 21:11:26,897 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45513,1690146675147 expired; onlineServers=0 2023-07-23 21:11:26,897 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40555,1690146674547' ***** 2023-07-23 21:11:26,898 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 21:11:26,898 DEBUG [M:0;jenkins-hbase4:40555] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30ebc689, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:26,898 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:26,899 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:26,899 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:26,900 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:26,900 INFO [M:0;jenkins-hbase4:40555] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@29b4e5f5{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:11:26,900 INFO [M:0;jenkins-hbase4:40555] server.AbstractConnector(383): Stopped ServerConnector@731c75e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,900 INFO [M:0;jenkins-hbase4:40555] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:26,901 INFO [M:0;jenkins-hbase4:40555] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78cc12d1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:26,901 INFO [M:0;jenkins-hbase4:40555] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@177329fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:26,902 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40555,1690146674547 2023-07-23 21:11:26,902 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40555,1690146674547; all regions closed. 
2023-07-23 21:11:26,902 DEBUG [M:0;jenkins-hbase4:40555] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:26,902 INFO [M:0;jenkins-hbase4:40555] master.HMaster(1491): Stopping master jetty server 2023-07-23 21:11:26,902 INFO [M:0;jenkins-hbase4:40555] server.AbstractConnector(383): Stopped ServerConnector@34f59e01{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:26,903 DEBUG [M:0;jenkins-hbase4:40555] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 21:11:26,903 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 21:11:26,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146675555] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146675555,5,FailOnTimeoutGroup] 2023-07-23 21:11:26,903 DEBUG [M:0;jenkins-hbase4:40555] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 21:11:26,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146675555] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146675555,5,FailOnTimeoutGroup] 2023-07-23 21:11:26,903 INFO [M:0;jenkins-hbase4:40555] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 21:11:26,903 INFO [M:0;jenkins-hbase4:40555] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-23 21:11:26,903 INFO [M:0;jenkins-hbase4:40555] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 21:11:26,903 DEBUG [M:0;jenkins-hbase4:40555] master.HMaster(1512): Stopping service threads 2023-07-23 21:11:26,903 INFO [M:0;jenkins-hbase4:40555] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 21:11:26,903 ERROR [M:0;jenkins-hbase4:40555] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 21:11:26,904 INFO [M:0;jenkins-hbase4:40555] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 21:11:26,904 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-23 21:11:26,904 DEBUG [M:0;jenkins-hbase4:40555] zookeeper.ZKUtil(398): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 21:11:26,904 WARN [M:0;jenkins-hbase4:40555] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 21:11:26,904 INFO [M:0;jenkins-hbase4:40555] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 21:11:26,904 INFO [M:0;jenkins-hbase4:40555] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 21:11:26,905 DEBUG [M:0;jenkins-hbase4:40555] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:11:26,905 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:11:26,905 DEBUG [M:0;jenkins-hbase4:40555] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:26,905 DEBUG [M:0;jenkins-hbase4:40555] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:11:26,905 DEBUG [M:0;jenkins-hbase4:40555] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:26,905 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.03 KB heapSize=78.87 KB 2023-07-23 21:11:26,915 INFO [M:0;jenkins-hbase4:40555] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.03 KB at sequenceid=1083 (bloomFilter=true), to=hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9adc989afd35431db52471215a6b4750 2023-07-23 21:11:26,920 DEBUG [M:0;jenkins-hbase4:40555] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9adc989afd35431db52471215a6b4750 as hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9adc989afd35431db52471215a6b4750 2023-07-23 21:11:26,925 INFO [M:0;jenkins-hbase4:40555] regionserver.HStore(1080): Added hdfs://localhost:32841/user/jenkins/test-data/7284a508-9049-9674-4759-00d4971a3914/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9adc989afd35431db52471215a6b4750, entries=21, sequenceid=1083, filesize=7.8 K 2023-07-23 21:11:26,926 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegion(2948): Finished flush of dataSize ~64.03 KB/65568, heapSize ~78.85 KB/80744, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=1083, compaction requested=true 2023-07-23 21:11:26,928 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:26,928 DEBUG [M:0;jenkins-hbase4:40555] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:26,932 INFO [M:0;jenkins-hbase4:40555] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 21:11:26,932 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:26,933 INFO [M:0;jenkins-hbase4:40555] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40555 2023-07-23 21:11:26,935 DEBUG [M:0;jenkins-hbase4:40555] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40555,1690146674547 already deleted, retry=false 2023-07-23 21:11:27,040 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,040 INFO [M:0;jenkins-hbase4:40555] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40555,1690146674547; zookeeper connection closed. 
2023-07-23 21:11:27,040 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): master:40555-0x1019405901c001c, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,140 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,140 INFO [RS:2;jenkins-hbase4:45513] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45513,1690146675147; zookeeper connection closed. 2023-07-23 21:11:27,140 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:45513-0x1019405901c001f, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,141 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@757ba8b0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@757ba8b0 2023-07-23 21:11:27,240 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,240 INFO [RS:4;jenkins-hbase4:34137] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34137,1690146686390; zookeeper connection closed. 2023-07-23 21:11:27,241 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:34137-0x1019405901c002a, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,241 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@684daf74] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@684daf74 2023-07-23 21:11:27,341 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,341 INFO [RS:3;jenkins-hbase4:38679] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38679,1690146684514; zookeeper connection closed. 2023-07-23 21:11:27,341 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:38679-0x1019405901c0028, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,341 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5ede30c1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5ede30c1 2023-07-23 21:11:27,441 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,441 INFO [RS:1;jenkins-hbase4:40573] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40573,1690146674972; zookeeper connection closed. 
2023-07-23 21:11:27,441 DEBUG [Listener at localhost/38995-EventThread] zookeeper.ZKWatcher(600): regionserver:40573-0x1019405901c001e, quorum=127.0.0.1:59847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:27,441 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6ed31046] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6ed31046 2023-07-23 21:11:27,441 INFO [Listener at localhost/38995] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-23 21:11:27,442 WARN [Listener at localhost/38995] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:11:27,454 INFO [Listener at localhost/38995] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:27,561 WARN [BP-1946696265-172.31.14.131-1690146636244 heartbeating to localhost/127.0.0.1:32841] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:11:27,561 WARN [BP-1946696265-172.31.14.131-1690146636244 heartbeating to localhost/127.0.0.1:32841] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1946696265-172.31.14.131-1690146636244 (Datanode Uuid a1b981de-5b65-4d22-9fec-4f78943f74e4) service to localhost/127.0.0.1:32841 2023-07-23 21:11:27,564 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data5/current/BP-1946696265-172.31.14.131-1690146636244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:27,564 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data6/current/BP-1946696265-172.31.14.131-1690146636244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:27,566 WARN [Listener at localhost/38995] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:11:27,569 INFO [Listener at localhost/38995] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:27,673 WARN [BP-1946696265-172.31.14.131-1690146636244 heartbeating to localhost/127.0.0.1:32841] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:11:27,673 WARN [BP-1946696265-172.31.14.131-1690146636244 heartbeating to localhost/127.0.0.1:32841] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1946696265-172.31.14.131-1690146636244 (Datanode Uuid 6cbb2b85-d5f1-47fa-94cc-60f17130e30b) service to localhost/127.0.0.1:32841 2023-07-23 21:11:27,674 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data3/current/BP-1946696265-172.31.14.131-1690146636244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:27,674 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data4/current/BP-1946696265-172.31.14.131-1690146636244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:27,676 WARN [Listener at localhost/38995] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:11:27,680 INFO [Listener at localhost/38995] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:27,785 WARN [BP-1946696265-172.31.14.131-1690146636244 heartbeating to localhost/127.0.0.1:32841] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:11:27,785 WARN [BP-1946696265-172.31.14.131-1690146636244 heartbeating to localhost/127.0.0.1:32841] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1946696265-172.31.14.131-1690146636244 (Datanode Uuid 4beab255-6c9c-4249-939b-72fa2a908107) service to localhost/127.0.0.1:32841 2023-07-23 21:11:27,785 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data1/current/BP-1946696265-172.31.14.131-1690146636244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:27,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/358ce173-37d9-a63b-bded-850742491fb8/cluster_e9a7d46f-9dca-4247-1d61-24f6d2a37392/dfs/data/data2/current/BP-1946696265-172.31.14.131-1690146636244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:27,814 INFO [Listener at localhost/38995] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:27,942 INFO [Listener at localhost/38995] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 21:11:28,000 INFO [Listener at localhost/38995] hbase.HBaseTestingUtility(1293): Minicluster is down