2023-07-24 04:10:44,762 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43
2023-07-24 04:10:44,780 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins
2023-07-24 04:10:44,801 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-24 04:10:44,801 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c, deleteOnExit=true
2023-07-24 04:10:44,802 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-24 04:10:44,802 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/test.cache.data in system properties and HBase conf
2023-07-24 04:10:44,802 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.tmp.dir in system properties and HBase conf
2023-07-24 04:10:44,803 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir in system properties and HBase conf
2023-07-24 04:10:44,803 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-24 04:10:44,804 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-24 04:10:44,804 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-24 04:10:44,941 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-24 04:10:45,477 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-24 04:10:45,482 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-24 04:10:45,482 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-24 04:10:45,482 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-24 04:10:45,483 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-24 04:10:45,483 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-24 04:10:45,483 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-24 04:10:45,484 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-24 04:10:45,484 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-24 04:10:45,484 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-24 04:10:45,485 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/nfs.dump.dir in system properties and HBase conf
2023-07-24 04:10:45,485 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/java.io.tmpdir in system properties and HBase conf
2023-07-24 04:10:45,486 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-24 04:10:45,486 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-24 04:10:45,486 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-24 04:10:46,082 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-24 04:10:46,087 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-24 04:10:46,450 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-24 04:10:46,656 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-24 04:10:46,672 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 04:10:46,720 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 04:10:46,756 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/java.io.tmpdir/Jetty_localhost_36529_hdfs____.6qz599/webapp
2023-07-24 04:10:46,911 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36529
2023-07-24 04:10:46,960 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-24 04:10:46,960 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-24 04:10:47,561 WARN [Listener at localhost/42399] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 04:10:47,650 WARN [Listener at localhost/42399] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 04:10:47,682 WARN [Listener at localhost/42399] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 04:10:47,691 INFO [Listener at localhost/42399] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 04:10:47,713 INFO [Listener at localhost/42399] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/java.io.tmpdir/Jetty_localhost_36541_datanode____qgnwk6/webapp
2023-07-24 04:10:47,840 INFO [Listener at localhost/42399] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36541
2023-07-24 04:10:48,420 WARN [Listener at localhost/45421] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 04:10:48,508 WARN [Listener at localhost/45421] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 04:10:48,514 WARN [Listener at localhost/45421] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 04:10:48,516 INFO [Listener at localhost/45421] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 04:10:48,527 INFO [Listener at localhost/45421] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/java.io.tmpdir/Jetty_localhost_37087_datanode____.r4y7ny/webapp
2023-07-24 04:10:48,638 INFO [Listener at localhost/45421] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37087
2023-07-24 04:10:48,648 WARN [Listener at localhost/46333] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 04:10:48,696 WARN [Listener at localhost/46333] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 04:10:48,700 WARN [Listener at localhost/46333] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 04:10:48,703 INFO [Listener at localhost/46333] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 04:10:48,711 INFO [Listener at localhost/46333] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/java.io.tmpdir/Jetty_localhost_35313_datanode____f4ml6/webapp
2023-07-24 04:10:48,820 INFO [Listener at localhost/46333] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35313
2023-07-24 04:10:48,831 WARN [Listener at localhost/41307] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 04:10:49,064 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7a827ef334537f14: Processing first storage report for DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac from datanode 267eb9ad-ba53-4f9f-a855-80a939a2da6d
2023-07-24 04:10:49,066 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7a827ef334537f14: from storage DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac node DatanodeRegistration(127.0.0.1:45555, datanodeUuid=267eb9ad-ba53-4f9f-a855-80a939a2da6d, infoPort=39295, infoSecurePort=0, ipcPort=46333, storageInfo=lv=-57;cid=testClusterID;nsid=1712598785;c=1690171846162), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-24 04:10:49,066 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b3314b4ecd96d0b: Processing first storage report for DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a from datanode 7c96df4f-6d55-4465-8844-bd97b2788d10
2023-07-24 04:10:49,066 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b3314b4ecd96d0b: from storage DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a node DatanodeRegistration(127.0.0.1:39051, datanodeUuid=7c96df4f-6d55-4465-8844-bd97b2788d10, infoPort=34415, infoSecurePort=0, ipcPort=45421, storageInfo=lv=-57;cid=testClusterID;nsid=1712598785;c=1690171846162), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xde78d5a160db0daa: Processing first storage report for DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29 from datanode 6fb9a989-2092-4618-92a6-19c5d5216065
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xde78d5a160db0daa: from storage DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29 node DatanodeRegistration(127.0.0.1:40837, datanodeUuid=6fb9a989-2092-4618-92a6-19c5d5216065, infoPort=35899, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1712598785;c=1690171846162), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7a827ef334537f14: Processing first storage report for DS-e2fa19f7-7380-43ac-93a5-d9594f08127d from datanode 267eb9ad-ba53-4f9f-a855-80a939a2da6d
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7a827ef334537f14: from storage DS-e2fa19f7-7380-43ac-93a5-d9594f08127d node DatanodeRegistration(127.0.0.1:45555, datanodeUuid=267eb9ad-ba53-4f9f-a855-80a939a2da6d, infoPort=39295, infoSecurePort=0, ipcPort=46333, storageInfo=lv=-57;cid=testClusterID;nsid=1712598785;c=1690171846162), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b3314b4ecd96d0b: Processing first storage report for DS-1ac586a5-c687-41de-8488-d26505a74e3c from datanode 7c96df4f-6d55-4465-8844-bd97b2788d10
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b3314b4ecd96d0b: from storage DS-1ac586a5-c687-41de-8488-d26505a74e3c node DatanodeRegistration(127.0.0.1:39051, datanodeUuid=7c96df4f-6d55-4465-8844-bd97b2788d10, infoPort=34415, infoSecurePort=0, ipcPort=45421, storageInfo=lv=-57;cid=testClusterID;nsid=1712598785;c=1690171846162), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xde78d5a160db0daa: Processing first storage report for DS-5324a703-bb92-4195-96fb-196e350371a0 from datanode 6fb9a989-2092-4618-92a6-19c5d5216065
2023-07-24 04:10:49,067 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xde78d5a160db0daa: from storage DS-5324a703-bb92-4195-96fb-196e350371a0 node DatanodeRegistration(127.0.0.1:40837, datanodeUuid=6fb9a989-2092-4618-92a6-19c5d5216065, infoPort=35899, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1712598785;c=1690171846162), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-24 04:10:49,302 DEBUG [Listener at localhost/41307] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43
2023-07-24 04:10:49,384 INFO [Listener at localhost/41307] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/zookeeper_0, clientPort=59235, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-24 04:10:49,401 INFO [Listener at localhost/41307] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59235
2023-07-24 04:10:49,408 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:49,410 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:50,077 INFO [Listener at localhost/41307] util.FSUtils(471): Created version file at hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca with version=8
2023-07-24 04:10:50,078 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/hbase-staging
2023-07-24 04:10:50,086 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-24 04:10:50,087 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-24 04:10:50,087 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-24 04:10:50,087 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-24 04:10:50,457 INFO [Listener at localhost/41307] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-24 04:10:51,035 INFO [Listener at localhost/41307] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 04:10:51,085 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:51,085 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:51,086 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 04:10:51,086 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:51,086 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 04:10:51,230 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 04:10:51,329 DEBUG [Listener at localhost/41307] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-24 04:10:51,423 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36883
2023-07-24 04:10:51,436 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:51,437 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:51,461 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36883 connecting to ZooKeeper ensemble=127.0.0.1:59235
2023-07-24 04:10:51,513 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:368830x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 04:10:51,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36883-0x10195863d980000 connected
2023-07-24 04:10:51,545 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 04:10:51,546 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 04:10:51,550 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 04:10:51,561 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36883
2023-07-24 04:10:51,561 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36883
2023-07-24 04:10:51,562 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36883
2023-07-24 04:10:51,562 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36883
2023-07-24 04:10:51,563 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36883
2023-07-24 04:10:51,608 INFO [Listener at localhost/41307] log.Log(170): Logging initialized @7907ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-24 04:10:51,757 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 04:10:51,758 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 04:10:51,759 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 04:10:51,761 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-24 04:10:51,761 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 04:10:51,761 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 04:10:51,765 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 04:10:51,837 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 39353
2023-07-24 04:10:51,839 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 04:10:51,876 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:51,880 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78f85e9a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE}
2023-07-24 04:10:51,881 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:51,881 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b062a14{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 04:10:51,954 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 04:10:51,969 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 04:10:51,969 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 04:10:51,971 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 04:10:51,981 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,018 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@407d85db{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-24 04:10:52,031 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@7d776eb6{HTTP/1.1, (http/1.1)}{0.0.0.0:39353}
2023-07-24 04:10:52,032 INFO [Listener at localhost/41307] server.Server(415): Started @8331ms
2023-07-24 04:10:52,035 INFO [Listener at localhost/41307] master.HMaster(444): hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca, hbase.cluster.distributed=false
2023-07-24 04:10:52,138 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 04:10:52,138 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,138 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,139 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 04:10:52,139 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,139 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 04:10:52,150 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 04:10:52,154 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36109
2023-07-24 04:10:52,158 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 04:10:52,200 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 04:10:52,202 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,205 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,207 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36109 connecting to ZooKeeper ensemble=127.0.0.1:59235
2023-07-24 04:10:52,213 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:361090x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 04:10:52,215 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36109-0x10195863d980001 connected
2023-07-24 04:10:52,215 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 04:10:52,217 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 04:10:52,217 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 04:10:52,218 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36109
2023-07-24 04:10:52,218 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36109
2023-07-24 04:10:52,221 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36109
2023-07-24 04:10:52,222 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36109
2023-07-24 04:10:52,222 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36109
2023-07-24 04:10:52,225 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 04:10:52,225 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 04:10:52,225 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 04:10:52,226 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 04:10:52,226 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 04:10:52,226 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 04:10:52,227 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 04:10:52,229 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 43243
2023-07-24 04:10:52,229 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 04:10:52,235 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,235 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6b81b873{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE}
2023-07-24 04:10:52,235 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,236 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d96518f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 04:10:52,251 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 04:10:52,253 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 04:10:52,253 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 04:10:52,254 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-24 04:10:52,255 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,259 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2ff5d8c6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-24 04:10:52,260 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@516b648c{HTTP/1.1, (http/1.1)}{0.0.0.0:43243}
2023-07-24 04:10:52,260 INFO [Listener at localhost/41307] server.Server(415): Started @8559ms
2023-07-24 04:10:52,274 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 04:10:52,274 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,274 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,275 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 04:10:52,275 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,275 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 04:10:52,275 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 04:10:52,277 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37679
2023-07-24 04:10:52,278 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 04:10:52,279 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 04:10:52,279 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,281 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,282 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37679 connecting to ZooKeeper ensemble=127.0.0.1:59235
2023-07-24 04:10:52,286 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:376790x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 04:10:52,287 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37679-0x10195863d980002 connected
2023-07-24 04:10:52,287 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 04:10:52,288 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 04:10:52,288 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 04:10:52,290 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37679
2023-07-24 04:10:52,291 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37679
2023-07-24 04:10:52,294 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37679
2023-07-24 04:10:52,295 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37679
2023-07-24 04:10:52,297 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37679
2023-07-24 04:10:52,301 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 04:10:52,301 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 04:10:52,301 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 04:10:52,301 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 04:10:52,302 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 04:10:52,302 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 04:10:52,302 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 04:10:52,303 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 36799
2023-07-24 04:10:52,303 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 04:10:52,304 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,304 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6544163a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE}
2023-07-24 04:10:52,305 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,305 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@333febea{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 04:10:52,317 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 04:10:52,317 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 04:10:52,318 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 04:10:52,318 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 04:10:52,319 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,320 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@41e9759c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-24 04:10:52,321 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@7eca90ad{HTTP/1.1, (http/1.1)}{0.0.0.0:36799}
2023-07-24 04:10:52,321 INFO [Listener at localhost/41307] server.Server(415): Started @8620ms
2023-07-24 04:10:52,334 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 04:10:52,334 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,334 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,335 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 04:10:52,335 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 04:10:52,335 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 04:10:52,335 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 04:10:52,336 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41157
2023-07-24 04:10:52,337 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 04:10:52,338 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 04:10:52,339 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,341 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,342 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41157 connecting to ZooKeeper ensemble=127.0.0.1:59235
2023-07-24 04:10:52,346 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:411570x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 04:10:52,347 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:411570x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 04:10:52,348 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:411570x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 04:10:52,349 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:411570x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 04:10:52,352 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41157-0x10195863d980003 connected
2023-07-24 04:10:52,354 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41157
2023-07-24 04:10:52,354 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41157
2023-07-24 04:10:52,355 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41157
2023-07-24 04:10:52,356 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41157
2023-07-24 04:10:52,356 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41157
2023-07-24 04:10:52,359 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 04:10:52,359 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 04:10:52,359 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 04:10:52,360 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 04:10:52,360 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 04:10:52,360 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 04:10:52,361 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 04:10:52,361 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 39025
2023-07-24 04:10:52,362 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 04:10:52,363 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,364 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1da587c4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE}
2023-07-24 04:10:52,364 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,365 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@19d64a7a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-24 04:10:52,376 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 04:10:52,377 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 04:10:52,377 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 04:10:52,377 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-24 04:10:52,378 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 04:10:52,379 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@45df417c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-24 04:10:52,381 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@62eabc6f{HTTP/1.1, (http/1.1)}{0.0.0.0:39025}
2023-07-24 04:10:52,381 INFO [Listener at localhost/41307] server.Server(415): Started @8680ms
2023-07-24 04:10:52,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 04:10:52,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@406a344{HTTP/1.1, (http/1.1)}{0.0.0.0:33355}
2023-07-24 04:10:52,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8698ms
2023-07-24 04:10:52,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36883,1690171850269
2023-07-24 04:10:52,412 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-24 04:10:52,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36883,1690171850269
2023-07-24 04:10:52,437 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 04:10:52,437 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 04:10:52,437 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 04:10:52,437 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 04:10:52,439 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 04:10:52,439 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-24 04:10:52,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36883,1690171850269 from backup master directory
2023-07-24 04:10:52,442 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-24 04:10:52,447 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36883,1690171850269
2023-07-24 04:10:52,448 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-24 04:10:52,448 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-24 04:10:52,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36883,1690171850269
2023-07-24 04:10:52,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-24 04:10:52,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-24 04:10:52,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/hbase.id with ID: be768ff7-bd00-4986-93b9-7f0c7f45a7c1
2023-07-24 04:10:52,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 04:10:52,632 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 04:10:52,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x271197f5 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-24 04:10:52,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@688b7928, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-24 04:10:52,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-24 04:10:52,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-24 04:10:52,782 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-24 04:10:52,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-24
04:10:52,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:10:52,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:10:52,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
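
Note on the two DEBUG stack traces above: they are logged at DEBUG level and the AsyncFSWALProvider is instantiated immediately afterwards, so they read as capability probes (the asyncfs helpers checking by reflection which Hadoop APIs are on the classpath and falling back when a symbol is missing) rather than failures. Below is a minimal, illustrative sketch of that probe-and-fall-back pattern; the class and method names are invented for illustration and are not taken from the HBase source.

    import java.lang.reflect.Method;

    // Hypothetical illustration of reflection-based capability probing; not HBase's own code.
    final class HadoopCapabilityProbe {

        // Mirrors the IllegalArgumentException logged above: Enum.valueOf throws when the
        // constant is absent, which indicates an older (2.x) Hadoop on the classpath.
        static boolean hasShouldReplicateFlag() {
            try {
                Enum.valueOf(org.apache.hadoop.fs.CreateFlag.class, "SHOULD_REPLICATE");
                return true;
            } catch (IllegalArgumentException e) {
                return false; // fall back to the pre-SHOULD_REPLICATE create() path
            }
        }

        // Mirrors the NoSuchMethodException logged above: probe DFSClient for the
        // HDFS-12396 method and fall back if it is absent.
        static boolean hasHdfs12396(Class<?> dfsClientClass) {
            try {
                Method m = dfsClientClass.getDeclaredMethod("decryptEncryptedDataEncryptionKey",
                    org.apache.hadoop.fs.FileEncryptionInfo.class);
                return m != null;
            } catch (NoSuchMethodException e) {
                return false;
            }
        }
    }
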
2023-07-24 04:10:52,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store-tmp 2023-07-24 04:10:52,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:52,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 04:10:52,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:10:52,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:10:52,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 04:10:52,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:10:52,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 04:10:52,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 04:10:52,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:10:52,933 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36883%2C1690171850269, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/oldWALs, maxLogs=10 2023-07-24 04:10:53,024 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:53,024 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:53,056 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:53,064 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:10:53,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 2023-07-24 04:10:53,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:10:53,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:10:53,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:53,188 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:10:53,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:10:53,287 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:10:53,294 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 04:10:53,335 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 04:10:53,349 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 04:10:53,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:10:53,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:10:53,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:10:53,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:53,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9695201280, jitterRate=-0.09706401824951172}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:53,385 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 04:10:53,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 04:10:53,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 04:10:53,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 04:10:53,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 04:10:53,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-24 04:10:53,469 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 41 msec 2023-07-24 04:10:53,469 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 04:10:53,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 04:10:53,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-24 04:10:53,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 04:10:53,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 04:10:53,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 04:10:53,526 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:53,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 04:10:53,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 04:10:53,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 04:10:53,546 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:10:53,547 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:10:53,547 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:10:53,546 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:10:53,547 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:53,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36883,1690171850269, sessionid=0x10195863d980000, setting cluster-up flag (Was=false) 2023-07-24 04:10:53,566 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:53,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 04:10:53,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:10:53,578 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:53,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 04:10:53,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:10:53,589 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/.tmp 2023-07-24 04:10:53,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 04:10:53,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 04:10:53,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:10:53,685 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:10:53,686 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:10:53,685 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:10:53,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 04:10:53,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 04:10:53,692 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:10:53,692 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:10:53,692 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:10:53,700 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:10:53,700 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:10:53,702 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:10:53,702 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:10:53,706 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:10:53,706 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:10:53,708 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:10:53,710 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:10:53,711 DEBUG [RS:0;jenkins-hbase4:36109] zookeeper.ReadOnlyZKClient(139): Connect 0x244fd597 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:53,714 DEBUG [RS:2;jenkins-hbase4:41157] zookeeper.ReadOnlyZKClient(139): Connect 0x7779fbd0 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:53,715 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:10:53,731 DEBUG [RS:1;jenkins-hbase4:37679] zookeeper.ReadOnlyZKClient(139): Connect 0x7ddd3abf to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:53,758 DEBUG [RS:2;jenkins-hbase4:41157] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79be1b4f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:53,760 DEBUG [RS:2;jenkins-hbase4:41157] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d8190a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:10:53,762 DEBUG [RS:0;jenkins-hbase4:36109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a882f6e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:53,762 DEBUG [RS:0;jenkins-hbase4:36109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2eb7bee7, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:10:53,765 DEBUG [RS:1;jenkins-hbase4:37679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@731acb74, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:53,766 DEBUG [RS:1;jenkins-hbase4:37679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3227c14f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:10:53,790 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36109 2023-07-24 04:10:53,790 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37679 2023-07-24 04:10:53,792 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41157 2023-07-24 04:10:53,798 INFO [RS:1;jenkins-hbase4:37679] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:10:53,798 INFO [RS:1;jenkins-hbase4:37679] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:10:53,798 INFO [RS:0;jenkins-hbase4:36109] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:10:53,800 INFO [RS:0;jenkins-hbase4:36109] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:10:53,800 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:10:53,799 INFO [RS:2;jenkins-hbase4:41157] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:10:53,801 INFO [RS:2;jenkins-hbase4:41157] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:10:53,801 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:10:53,800 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 04:10:53,805 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:36109, startcode=1690171852137 2023-07-24 04:10:53,805 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:37679, startcode=1690171852273 2023-07-24 04:10:53,805 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:41157, startcode=1690171852333 2023-07-24 04:10:53,839 DEBUG [RS:2;jenkins-hbase4:41157] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:10:53,840 DEBUG [RS:0;jenkins-hbase4:36109] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:10:53,839 DEBUG [RS:1;jenkins-hbase4:37679] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:10:53,853 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 04:10:53,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 04:10:53,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 04:10:53,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 04:10:53,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 04:10:53,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:10:53,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:10:53,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:10:53,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:10:53,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 04:10:53,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:53,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:10:53,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:53,919 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35479, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:10:53,919 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41943, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:10:53,919 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43917, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:10:53,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690171883919 2023-07-24 04:10:53,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 04:10:53,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 04:10:53,928 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 04:10:53,929 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 04:10:53,930 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at 
org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:53,932 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 04:10:53,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 04:10:53,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 04:10:53,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 04:10:53,975 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:53,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 04:10:53,977 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at 
org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:53,978 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:53,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 04:10:53,982 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 04:10:53,983 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 04:10:53,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 04:10:53,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 04:10:53,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171853987,5,FailOnTimeoutGroup] 2023-07-24 04:10:53,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171853988,5,FailOnTimeoutGroup] 2023-07-24 04:10:53,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:53,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 04:10:53,990 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:53,990 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,032 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 04:10:54,032 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 04:10:54,032 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 04:10:54,033 WARN [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
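
The ServerNotRunningYetException entries and the "reportForDuty failed; sleeping 100 ms and then retrying" warnings above show an expected startup race: the three region servers attempt to register before the master's RPC services are fully up, and simply retry. A rough, hypothetical sketch of that register-and-retry handshake follows; the names are invented for illustration and this is not the actual HRegionServer code.

    // Hypothetical sketch of a report-for-duty retry loop; not the HBase implementation itself.
    final class ReportForDutyLoop {

        interface MasterStub {
            void regionServerStartup(String serverName) throws ServerNotRunningYetException;
        }

        static class ServerNotRunningYetException extends Exception {}

        static void reportUntilAccepted(MasterStub master, String serverName)
                throws InterruptedException {
            while (true) {
                try {
                    master.regionServerStartup(serverName); // accepted once the master is active
                    return;
                } catch (ServerNotRunningYetException e) {
                    Thread.sleep(100); // matches the "sleeping 100 ms and then retrying" warning
                }
            }
        }
    }
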
2023-07-24 04:10:54,032 WARN [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 04:10:54,033 WARN [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 04:10:54,039 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 04:10:54,040 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 04:10:54,041 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:10:54,064 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:54,066 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 04:10:54,069 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:10:54,070 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 
04:10:54,071 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:54,071 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 04:10:54,073 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:10:54,074 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 04:10:54,075 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:54,075 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 04:10:54,077 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:10:54,078 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 04:10:54,079 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:54,081 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:10:54,082 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:10:54,086 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 04:10:54,089 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 04:10:54,093 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:54,094 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10851386560, jitterRate=0.010614126920700073}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 04:10:54,094 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 04:10:54,094 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 04:10:54,094 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 04:10:54,094 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 04:10:54,094 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 04:10:54,094 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 04:10:54,096 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 04:10:54,096 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 04:10:54,105 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 04:10:54,105 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 04:10:54,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 04:10:54,127 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 04:10:54,130 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 04:10:54,134 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:37679, startcode=1690171852273 
2023-07-24 04:10:54,134 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:41157, startcode=1690171852333 2023-07-24 04:10:54,134 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:36109, startcode=1690171852137 2023-07-24 04:10:54,139 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,140 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:10:54,141 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 04:10:54,145 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,145 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:10:54,145 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 04:10:54,145 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,146 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:10:54,146 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 04:10:54,146 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:10:54,146 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:10:54,147 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:10:54,146 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 04:10:54,147 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39353 2023-07-24 04:10:54,147 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39353 2023-07-24 04:10:54,147 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:10:54,147 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:10:54,147 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39353 2023-07-24 04:10:54,156 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:54,157 DEBUG [RS:2;jenkins-hbase4:41157] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,157 DEBUG [RS:0;jenkins-hbase4:36109] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,157 WARN [RS:2;jenkins-hbase4:41157] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:10:54,157 WARN [RS:0;jenkins-hbase4:36109] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:10:54,158 INFO [RS:2;jenkins-hbase4:41157] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:10:54,157 DEBUG [RS:1;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,158 INFO [RS:0;jenkins-hbase4:36109] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:10:54,158 WARN [RS:1;jenkins-hbase4:37679] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
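The "Config from master" records above show each of the three region servers receiving hbase.rootdir, fs.defaultFS and hbase.master.info.port from the master at registration time. As a hedged illustration (not taken from the test code), the same keys can be set on a plain client-side Configuration; the literal values below are simply copied from this run's log:

    // Sketch only: the three master-provided keys on a standalone Configuration.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ConfigFromMasterSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir",
            "hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca");
        conf.set("fs.defaultFS", "hdfs://localhost:42399");
        conf.setInt("hbase.master.info.port", 39353);
        System.out.println(conf.get("hbase.rootdir"));
      }
    }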
2023-07-24 04:10:54,158 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,159 INFO [RS:1;jenkins-hbase4:37679] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:10:54,159 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,159 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36109,1690171852137] 2023-07-24 04:10:54,159 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41157,1690171852333] 2023-07-24 04:10:54,159 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37679,1690171852273] 2023-07-24 04:10:54,159 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,171 DEBUG [RS:1;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,171 DEBUG [RS:0;jenkins-hbase4:36109] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,171 DEBUG [RS:2;jenkins-hbase4:41157] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,172 DEBUG [RS:1;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,172 DEBUG [RS:0;jenkins-hbase4:36109] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,172 DEBUG [RS:2;jenkins-hbase4:41157] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,173 DEBUG [RS:0;jenkins-hbase4:36109] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,173 DEBUG [RS:1;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,173 DEBUG [RS:2;jenkins-hbase4:41157] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,185 DEBUG [RS:0;jenkins-hbase4:36109] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:10:54,185 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:10:54,185 DEBUG [RS:1;jenkins-hbase4:37679] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:10:54,196 INFO [RS:0;jenkins-hbase4:36109] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:10:54,196 INFO [RS:2;jenkins-hbase4:41157] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:10:54,196 INFO [RS:1;jenkins-hbase4:37679] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:10:54,219 INFO [RS:2;jenkins-hbase4:41157] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:10:54,219 INFO [RS:0;jenkins-hbase4:36109] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:10:54,219 INFO [RS:1;jenkins-hbase4:37679] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:10:54,224 INFO [RS:0;jenkins-hbase4:36109] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:10:54,224 INFO [RS:1;jenkins-hbase4:37679] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:10:54,225 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,224 INFO [RS:2;jenkins-hbase4:41157] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:10:54,225 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,226 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,226 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:10:54,226 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:10:54,227 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:10:54,236 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
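The MemStoreFlusher records above report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M on every server. A back-of-the-envelope sketch of how those two numbers usually relate under 2.x defaults, i.e. 40% of heap for the global limit (hbase.regionserver.global.memstore.size) and 95% of that limit for the low mark (hbase.regionserver.global.memstore.size.lower.limit); the 1956 MB heap figure is an assumption, not something the log states:

    // Sketch only: the logged limits back-derived from an assumed heap size.
    public class MemStoreLimitSketch {
      public static void main(String[] args) {
        double maxHeapMB = 1956.0;               // assumed JVM heap for this test process
        double globalLimitMB = maxHeapMB * 0.4;  // ~782.4 M as logged
        double lowMarkMB = globalLimitMB * 0.95; // ~743.3 M as logged
        System.out.printf("limit=%.1f M lowMark=%.1f M%n", globalLimitMB, lowMarkMB);
      }
    }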
2023-07-24 04:10:54,236 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,236 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,236 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,236 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,236 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,236 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,236 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:10:54,237 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:10:54,237 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG 
[RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:2;jenkins-hbase4:41157] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:1;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,237 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:10:54,238 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,238 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,239 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,239 DEBUG [RS:0;jenkins-hbase4:36109] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:54,240 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,240 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,240 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,241 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-24 04:10:54,241 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,241 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,250 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,251 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,251 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,260 INFO [RS:2;jenkins-hbase4:41157] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:10:54,262 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41157,1690171852333-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,267 INFO [RS:0;jenkins-hbase4:36109] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:10:54,267 INFO [RS:1;jenkins-hbase4:37679] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:10:54,268 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36109,1690171852137-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,268 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37679,1690171852273-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
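The ChoreService records above (CompactionChecker and MemstoreFlusherChore every 1000 ms, nonceCleaner every 360000 ms, a per-server HeapMemoryTunerChore every 60000 ms) all follow one pattern: a ScheduledChore whose chore() method runs on a fixed period until its Stoppable is stopped. A small self-contained sketch of that pattern; the chore name, period and Stoppable here are made up for illustration, and ChoreService/ScheduledChore are internal HBase classes rather than public client API:

    // Sketch only: a periodic chore scheduled on a ChoreService.
    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("sketch");
        service.scheduleChore(new ScheduledChore("demoChore", stopper, 1000) {
          @Override protected void chore() { System.out.println("chore tick"); }
        });
        Thread.sleep(3000);   // let the chore fire a few times
        stopper.stop("done");
        service.shutdown();
      }
    }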
2023-07-24 04:10:54,282 DEBUG [jenkins-hbase4:36883] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 04:10:54,283 INFO [RS:0;jenkins-hbase4:36109] regionserver.Replication(203): jenkins-hbase4.apache.org,36109,1690171852137 started 2023-07-24 04:10:54,283 INFO [RS:2;jenkins-hbase4:41157] regionserver.Replication(203): jenkins-hbase4.apache.org,41157,1690171852333 started 2023-07-24 04:10:54,283 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36109,1690171852137, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36109, sessionid=0x10195863d980001 2023-07-24 04:10:54,283 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41157,1690171852333, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41157, sessionid=0x10195863d980003 2023-07-24 04:10:54,283 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:10:54,283 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:10:54,283 DEBUG [RS:0;jenkins-hbase4:36109] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,283 DEBUG [RS:2;jenkins-hbase4:41157] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,284 DEBUG [RS:0;jenkins-hbase4:36109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36109,1690171852137' 2023-07-24 04:10:54,284 DEBUG [RS:2;jenkins-hbase4:41157] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41157,1690171852333' 2023-07-24 04:10:54,285 DEBUG [RS:0;jenkins-hbase4:36109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:10:54,285 DEBUG [RS:2;jenkins-hbase4:41157] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:10:54,285 DEBUG [RS:2;jenkins-hbase4:41157] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:10:54,285 INFO [RS:1;jenkins-hbase4:37679] regionserver.Replication(203): jenkins-hbase4.apache.org,37679,1690171852273 started 2023-07-24 04:10:54,285 DEBUG [RS:0;jenkins-hbase4:36109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:10:54,286 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37679,1690171852273, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37679, sessionid=0x10195863d980002 2023-07-24 04:10:54,286 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:10:54,286 DEBUG [RS:1;jenkins-hbase4:37679] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,286 DEBUG [RS:1;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37679,1690171852273' 2023-07-24 04:10:54,286 DEBUG 
[RS:1;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:10:54,287 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:10:54,287 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:10:54,287 DEBUG [RS:1;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:10:54,287 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:10:54,287 DEBUG [RS:0;jenkins-hbase4:36109] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:10:54,287 DEBUG [RS:2;jenkins-hbase4:41157] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:54,287 DEBUG [RS:0;jenkins-hbase4:36109] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:54,288 DEBUG [RS:0;jenkins-hbase4:36109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36109,1690171852137' 2023-07-24 04:10:54,288 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:10:54,288 DEBUG [RS:2;jenkins-hbase4:41157] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41157,1690171852333' 2023-07-24 04:10:54,288 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:10:54,288 DEBUG [RS:1;jenkins-hbase4:37679] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,288 DEBUG [RS:1;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37679,1690171852273' 2023-07-24 04:10:54,288 DEBUG [RS:1;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:10:54,288 DEBUG [RS:0;jenkins-hbase4:36109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:10:54,288 DEBUG [RS:2;jenkins-hbase4:41157] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:10:54,289 DEBUG [RS:1;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:10:54,289 DEBUG [RS:0;jenkins-hbase4:36109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:10:54,289 DEBUG [RS:2;jenkins-hbase4:41157] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:10:54,290 DEBUG [RS:1;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:10:54,290 DEBUG [RS:2;jenkins-hbase4:41157] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:10:54,290 DEBUG [RS:0;jenkins-hbase4:36109] 
procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:10:54,290 INFO [RS:1;jenkins-hbase4:37679] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:10:54,290 INFO [RS:0;jenkins-hbase4:36109] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:10:54,290 INFO [RS:2;jenkins-hbase4:41157] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:10:54,291 INFO [RS:0;jenkins-hbase4:36109] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 04:10:54,290 INFO [RS:1;jenkins-hbase4:37679] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 04:10:54,291 INFO [RS:2;jenkins-hbase4:41157] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 04:10:54,302 DEBUG [jenkins-hbase4:36883] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:10:54,303 DEBUG [jenkins-hbase4:36883] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:10:54,303 DEBUG [jenkins-hbase4:36883] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:10:54,303 DEBUG [jenkins-hbase4:36883] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:10:54,303 DEBUG [jenkins-hbase4:36883] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:10:54,307 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37679,1690171852273, state=OPENING 2023-07-24 04:10:54,316 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 04:10:54,318 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:54,319 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:10:54,324 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:10:54,406 INFO [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37679%2C1690171852273, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:10:54,406 INFO [RS:2;jenkins-hbase4:41157] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41157%2C1690171852333, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:10:54,408 INFO [RS:0;jenkins-hbase4:36109] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36109%2C1690171852137, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:10:54,447 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:54,451 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:54,452 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:54,453 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:54,454 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:54,454 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:54,454 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:54,467 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:54,469 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:54,476 INFO [RS:2;jenkins-hbase4:41157] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333/jenkins-hbase4.apache.org%2C41157%2C1690171852333.1690171854413 2023-07-24 04:10:54,476 INFO [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273/jenkins-hbase4.apache.org%2C37679%2C1690171852273.1690171854413 
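The WAL configuration records above show the AsyncFSWALProvider on every region server with blocksize=256 MB, rollsize=128 MB and maxLogs=32. A sketch of the configuration keys that usually drive those numbers in 2.x; the key names are standard, but whether each was set explicitly for this run (rather than derived from defaults such as the HDFS block size) is an assumption:

    // Sketch only: WAL-related keys mirroring the values logged for this run.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                     // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L << 20); // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);  // roll at half a block
        conf.setInt("hbase.regionserver.maxlogs", 32);                 // maxLogs=32
        long rollsize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
            * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
        System.out.println(rollsize); // prints 134217728, i.e. the logged rollsize of 128 MB
      }
    }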
2023-07-24 04:10:54,478 DEBUG [RS:2;jenkins-hbase4:41157] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK]] 2023-07-24 04:10:54,478 INFO [RS:0;jenkins-hbase4:36109] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137/jenkins-hbase4.apache.org%2C36109%2C1690171852137.1690171854413 2023-07-24 04:10:54,478 DEBUG [RS:0;jenkins-hbase4:36109] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK]] 2023-07-24 04:10:54,478 DEBUG [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK]] 2023-07-24 04:10:54,515 WARN [ReadOnlyZKClient-127.0.0.1:59235@0x271197f5] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 04:10:54,519 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,522 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:54,526 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38416, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:54,543 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 04:10:54,543 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:10:54,544 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:10:54,549 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38430, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:10:54,550 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37679] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:38430 deadline: 1690171914550, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:54,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37679%2C1690171852273.meta, suffix=.meta, 
logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:10:54,569 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:54,569 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:54,573 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:54,579 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273/jenkins-hbase4.apache.org%2C37679%2C1690171852273.meta.1690171854552.meta 2023-07-24 04:10:54,580 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:10:54,580 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:10:54,581 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:10:54,584 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 04:10:54,586 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 04:10:54,591 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 04:10:54,591 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:54,591 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 04:10:54,591 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 04:10:54,594 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 04:10:54,595 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:10:54,595 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:10:54,596 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 04:10:54,597 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:54,597 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 04:10:54,598 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:10:54,599 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:10:54,599 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 04:10:54,600 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:54,600 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 04:10:54,601 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:10:54,601 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:10:54,601 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 04:10:54,602 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:54,604 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:10:54,609 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:10:54,612 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
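The CompactionConfiguration records above (minCompactSize 128 MB, min/max files 3/10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000 ms) match branch-2.4 defaults. One logged value that is easy to reproduce by hand is the throttle point, which, when hbase.regionserver.thread.compaction.throttle is left unset, defaults to 2 * maxFilesToCompact * memstore flush size; a sketch of that arithmetic:

    // Sketch only: the default compaction throttle point from assumed defaults.
    public class CompactionThrottleSketch {
      public static void main(String[] args) {
        long memstoreFlushSize = 128L * 1024 * 1024; // assumed hbase.hregion.memstore.flush.size default
        int maxFilesToCompact = 10;                  // hbase.hstore.compaction.max default
        long throttlePoint = 2L * maxFilesToCompact * memstoreFlushSize;
        System.out.println(throttlePoint);           // prints 2684354560, as logged above
      }
    }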
2023-07-24 04:10:54,614 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 04:10:54,615 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10392512800, jitterRate=-0.032121822237968445}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 04:10:54,616 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 04:10:54,627 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690171854511 2023-07-24 04:10:54,645 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 04:10:54,646 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 04:10:54,647 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37679,1690171852273, state=OPEN 2023-07-24 04:10:54,652 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:10:54,652 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:10:54,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 04:10:54,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37679,1690171852273 in 328 msec 2023-07-24 04:10:54,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 04:10:54,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 545 msec 2023-07-24 04:10:54,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 973 msec 2023-07-24 04:10:54,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690171854671, completionTime=-1 2023-07-24 04:10:54,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 04:10:54,671 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
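The "Opened 1588230740" record above prints desiredMaxFileSize=10392512800 with jitterRate=-0.032121822237968445, and the earlier PEWorker-1 open printed 10851386560 with jitterRate=0.010614126920700073. A worked-arithmetic sketch, assuming the default 10 GB hbase.hregion.max.filesize; in the real split policy the jitterRate comes from a random draw scaled by a configured jitter fraction, so only the logged value is reused here:

    // Sketch only: desired split size = base max file size adjusted by the logged jitterRate.
    public class SplitSizeJitterSketch {
      public static void main(String[] args) {
        long maxFileSize = 10L * 1024 * 1024 * 1024; // assumed hbase.hregion.max.filesize default (10 GB)
        double jitterRate = 0.010614126920700073;    // value printed in the PEWorker-1 open record
        long desired = maxFileSize + (long) (maxFileSize * jitterRate);
        System.out.println(desired);                 // prints 10851386560, matching the log
      }
    }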
2023-07-24 04:10:54,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 04:10:54,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690171914728 2023-07-24 04:10:54,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690171974729 2023-07-24 04:10:54,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 57 msec 2023-07-24 04:10:54,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36883,1690171850269-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36883,1690171850269-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36883,1690171850269-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36883, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:54,756 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 04:10:54,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 04:10:54,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 04:10:54,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 04:10:54,783 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:10:54,786 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:10:54,803 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:54,806 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 empty. 2023-07-24 04:10:54,810 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:54,810 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 04:10:54,865 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 04:10:54,868 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 73e1052e9bc949a33667944e6caa42b4, NAME => 'hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:54,903 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:54,903 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 73e1052e9bc949a33667944e6caa42b4, disabling compactions & flushes 2023-07-24 04:10:54,904 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
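The create statement above declares the 'info' family of hbase:namespace with BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10 and BLOCKSIZE=8192. A sketch of how an equivalent family would be expressed with the 2.x descriptor builders; the table name demo:ns_like is hypothetical and nothing below is taken from the test itself:

    // Sketch only: a column family mirroring the attributes logged for hbase:namespace.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo", "ns_like"))          // hypothetical table
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
        System.out.println(td);
      }
    }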
2023-07-24 04:10:54,904 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:10:54,904 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. after waiting 0 ms 2023-07-24 04:10:54,904 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:10:54,904 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:10:54,904 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:10:54,911 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:10:54,930 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171854914"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171854914"}]},"ts":"1690171854914"} 2023-07-24 04:10:54,962 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:10:54,964 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:10:54,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171854965"}]},"ts":"1690171854965"} 2023-07-24 04:10:54,973 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 04:10:54,979 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:10:54,979 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:10:54,979 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:10:54,979 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:10:54,979 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:10:54,981 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN}] 2023-07-24 04:10:54,984 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN 2023-07-24 04:10:54,985 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:10:55,069 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:10:55,073 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 04:10:55,076 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:10:55,078 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:10:55,082 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,083 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f empty. 
2023-07-24 04:10:55,084 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,084 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 04:10:55,110 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 04:10:55,111 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6aa1ab126d58dcf7d835257119c9304f, NAME => 'hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:55,140 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 04:10:55,142 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,143 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171855142"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171855142"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171855142"}]},"ts":"1690171855142"} 2023-07-24 04:10:55,149 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:55,149 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 6aa1ab126d58dcf7d835257119c9304f, disabling compactions & flushes 2023-07-24 04:10:55,149 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:10:55,149 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:10:55,149 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. after waiting 0 ms 2023-07-24 04:10:55,150 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
2023-07-24 04:10:55,150 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:10:55,150 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:10:55,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:55,156 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:10:55,158 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171855158"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171855158"}]},"ts":"1690171855158"} 2023-07-24 04:10:55,166 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:10:55,167 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:10:55,168 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171855167"}]},"ts":"1690171855167"} 2023-07-24 04:10:55,171 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 04:10:55,176 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:10:55,176 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:10:55,176 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:10:55,176 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:10:55,176 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:10:55,176 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN}] 2023-07-24 04:10:55,179 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN 2023-07-24 04:10:55,181 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:10:55,308 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 
04:10:55,308 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:55,313 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52318, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:55,319 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:10:55,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73e1052e9bc949a33667944e6caa42b4, NAME => 'hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:10:55,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:55,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,323 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,325 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:10:55,325 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:10:55,326 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73e1052e9bc949a33667944e6caa42b4 columnFamilyName info 2023-07-24 04:10:55,327 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(310): 
Store=73e1052e9bc949a33667944e6caa42b4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:55,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,331 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 04:10:55,332 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=6aa1ab126d58dcf7d835257119c9304f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,333 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171855332"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171855332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171855332"}]},"ts":"1690171855332"} 2023-07-24 04:10:55,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:10:55,336 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 6aa1ab126d58dcf7d835257119c9304f, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:55,338 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:55,339 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73e1052e9bc949a33667944e6caa42b4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9459097760, jitterRate=-0.11905287206172943}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:55,339 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:10:55,341 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4., pid=7, masterSystemTime=1690171855308 2023-07-24 04:10:55,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:10:55,345 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
2023-07-24 04:10:55,346 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,346 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171855346"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171855346"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171855346"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171855346"}]},"ts":"1690171855346"} 2023-07-24 04:10:55,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 04:10:55,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,41157,1690171852333 in 201 msec 2023-07-24 04:10:55,359 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 04:10:55,360 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN in 374 msec 2023-07-24 04:10:55,361 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:10:55,361 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171855361"}]},"ts":"1690171855361"} 2023-07-24 04:10:55,364 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 04:10:55,367 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:10:55,370 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 595 msec 2023-07-24 04:10:55,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 04:10:55,386 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:10:55,386 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:55,410 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:10:55,411 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52334, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-24 04:10:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 04:10:55,448 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:10:55,455 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 37 msec 2023-07-24 04:10:55,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 04:10:55,468 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-24 04:10:55,468 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 04:10:55,494 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:10:55,494 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6aa1ab126d58dcf7d835257119c9304f, NAME => 'hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:10:55,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:10:55,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. service=MultiRowMutationService 2023-07-24 04:10:55,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 04:10:55,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:55,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,498 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,500 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m 2023-07-24 04:10:55,500 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m 2023-07-24 04:10:55,501 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6aa1ab126d58dcf7d835257119c9304f columnFamilyName m 2023-07-24 04:10:55,502 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(310): Store=6aa1ab126d58dcf7d835257119c9304f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:55,503 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,504 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,508 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:10:55,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:55,512 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6aa1ab126d58dcf7d835257119c9304f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5c8db32f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:55,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:10:55,513 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., pid=9, masterSystemTime=1690171855489 2023-07-24 04:10:55,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:10:55,516 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:10:55,516 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=6aa1ab126d58dcf7d835257119c9304f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,517 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171855516"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171855516"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171855516"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171855516"}]},"ts":"1690171855516"} 2023-07-24 04:10:55,522 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-24 04:10:55,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 6aa1ab126d58dcf7d835257119c9304f, server=jenkins-hbase4.apache.org,41157,1690171852333 in 183 msec 2023-07-24 04:10:55,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 04:10:55,528 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN in 347 msec 2023-07-24 04:10:55,537 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:10:55,545 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 79 msec 2023-07-24 04:10:55,546 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:10:55,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171855546"}]},"ts":"1690171855546"} 2023-07-24 04:10:55,549 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 04:10:55,552 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:10:55,555 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 482 msec 2023-07-24 04:10:55,560 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 04:10:55,563 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 04:10:55,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.115sec 2023-07-24 04:10:55,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 04:10:55,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 04:10:55,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 04:10:55,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36883,1690171850269-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 04:10:55,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36883,1690171850269-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 04:10:55,579 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 04:10:55,579 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 04:10:55,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 04:10:55,597 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x246ea770 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:55,602 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51dbf181, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:55,617 DEBUG [hconnection-0x18426628-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:10:55,629 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38444, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:10:55,641 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:10:55,642 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:55,648 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:55,648 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:55,652 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 04:10:55,652 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 04:10:55,655 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 04:10:55,659 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 04:10:55,668 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 04:10:55,668 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:10:55,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 04:10:55,674 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x2ef08007 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:55,679 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@377895d3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:55,679 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:10:55,683 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:10:55,684 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10195863d98000a connected 2023-07-24 04:10:55,720 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=422, OpenFileDescriptor=678, MaxFileDescriptor=60000, SystemLoadAverage=558, ProcessCount=176, AvailableMemoryMB=6302 2023-07-24 04:10:55,723 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-24 04:10:55,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:55,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:55,797 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 04:10:55,815 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:10:55,815 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:10:55,815 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:10:55,816 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:10:55,816 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:10:55,816 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:10:55,816 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:10:55,821 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39717 2023-07-24 04:10:55,821 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating 
BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:10:55,822 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:10:55,824 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:10:55,828 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:10:55,832 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39717 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:10:55,840 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:397170x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:10:55,842 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39717-0x10195863d98000b connected 2023-07-24 04:10:55,842 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:10:55,843 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 04:10:55,844 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:10:55,847 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39717 2023-07-24 04:10:55,847 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39717 2023-07-24 04:10:55,847 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39717 2023-07-24 04:10:55,848 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39717 2023-07-24 04:10:55,848 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39717 2023-07-24 04:10:55,850 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:10:55,850 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:10:55,850 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:10:55,851 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:10:55,851 INFO [Listener at localhost/41307] 
http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:10:55,851 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:10:55,851 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:10:55,852 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 34055 2023-07-24 04:10:55,852 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:10:55,855 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:10:55,856 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b0c8fbc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:10:55,856 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:10:55,856 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f9c119f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:10:55,867 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:10:55,868 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:10:55,868 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:10:55,869 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 04:10:55,870 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:10:55,871 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@716625e4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:10:55,873 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@2be36392{HTTP/1.1, (http/1.1)}{0.0.0.0:34055} 2023-07-24 04:10:55,873 INFO [Listener at localhost/41307] server.Server(415): Started @12172ms 2023-07-24 04:10:55,876 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:10:55,876 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:10:55,878 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 
04:10:55,878 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:10:55,881 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:10:55,882 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ReadOnlyZKClient(139): Connect 0x694ffec5 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:55,888 DEBUG [RS:3;jenkins-hbase4:39717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67881d9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:55,889 DEBUG [RS:3;jenkins-hbase4:39717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63b795b5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:10:55,901 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:39717 2023-07-24 04:10:55,901 INFO [RS:3;jenkins-hbase4:39717] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:10:55,901 INFO [RS:3;jenkins-hbase4:39717] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:10:55,902 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:10:55,902 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:39717, startcode=1690171855814 2023-07-24 04:10:55,903 DEBUG [RS:3;jenkins-hbase4:39717] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:10:55,908 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47605, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:10:55,909 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,909 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 04:10:55,909 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:10:55,909 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:10:55,909 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39353 2023-07-24 04:10:55,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:55,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:55,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:55,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:55,915 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:55,915 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,915 WARN [RS:3;jenkins-hbase4:39717] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 04:10:55,915 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39717,1690171855814] 2023-07-24 04:10:55,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,916 INFO [RS:3;jenkins-hbase4:39717] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:10:55,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,916 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:55,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:55,916 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 04:10:55,917 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:55,917 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,917 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,917 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:55,927 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 04:10:55,928 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:55,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:55,933 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,933 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:55,934 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:55,934 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:55,936 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:10:55,936 INFO [RS:3;jenkins-hbase4:39717] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:10:55,939 INFO [RS:3;jenkins-hbase4:39717] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:10:55,939 INFO [RS:3;jenkins-hbase4:39717] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:10:55,939 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:55,942 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:10:55,945 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
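The compaction throughput figures above (higher bound 100 MB/s, lower bound 50 MB/s, tuning period 60000 ms) are the PressureAwareCompactionThroughputController defaults. A hedged sketch of overriding them in a test Configuration; the property names are given from memory for HBase 2.x and should be verified against the running version:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputConf {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Raise both bounds above the 100 MB / 50 MB defaults seen in the log (values illustrative).
            conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 200L * 1024 * 1024);
            conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 100L * 1024 * 1024);
            System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
        }
    }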
2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:10:55,945 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,946 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,946 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,946 DEBUG [RS:3;jenkins-hbase4:39717] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:55,948 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:55,948 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:55,949 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:55,960 INFO [RS:3;jenkins-hbase4:39717] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:10:55,960 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39717,1690171855814-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:10:55,972 INFO [RS:3;jenkins-hbase4:39717] regionserver.Replication(203): jenkins-hbase4.apache.org,39717,1690171855814 started 2023-07-24 04:10:55,972 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39717,1690171855814, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39717, sessionid=0x10195863d98000b 2023-07-24 04:10:55,972 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:10:55,972 DEBUG [RS:3;jenkins-hbase4:39717] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,972 DEBUG [RS:3;jenkins-hbase4:39717] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39717,1690171855814' 2023-07-24 04:10:55,972 DEBUG [RS:3;jenkins-hbase4:39717] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:10:55,973 DEBUG [RS:3;jenkins-hbase4:39717] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:10:55,974 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:10:55,974 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:10:55,974 DEBUG [RS:3;jenkins-hbase4:39717] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:55,974 DEBUG [RS:3;jenkins-hbase4:39717] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39717,1690171855814' 2023-07-24 04:10:55,974 DEBUG [RS:3;jenkins-hbase4:39717] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:10:55,975 DEBUG [RS:3;jenkins-hbase4:39717] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:10:55,975 DEBUG [RS:3;jenkins-hbase4:39717] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:10:55,975 INFO [RS:3;jenkins-hbase4:39717] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:10:55,975 INFO [RS:3;jenkins-hbase4:39717] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
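The RSGroupAdminService requests that follow (AddRSGroup, ListRSGroupInfos, MoveServers) are issued by the test harness through the hbase-rsgroup client visible in the stack trace further down. A minimal sketch of the same two calls, assuming a running cluster Connection; the group name and server address are copied from this log, the rest is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AddGroupAndMoveMaster {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                // Create the group, then try to move the master's address into it.
                rsGroupAdmin.addRSGroup("master");
                // This is the call that fails below with ConstraintException:
                // jenkins-hbase4.apache.org:36883 is the master, not an online region server.
                rsGroupAdmin.moveServers(
                    Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36883)),
                    "master");
            }
        }
    }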
2023-07-24 04:10:55,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:10:55,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:55,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:55,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:55,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:10:55,995 DEBUG [hconnection-0xe88b18-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:10:55,999 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:10:56,004 DEBUG [hconnection-0xe88b18-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:10:56,009 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:10:56,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:10:56,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:56,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38870 deadline: 1690173056022, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:10:56,024 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:10:56,026 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:56,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,028 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36109, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:10:56,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:10:56,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:56,035 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-24 04:10:56,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:10:56,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:56,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup deadServerGroup 2023-07-24 04:10:56,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:56,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 04:10:56,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:10:56,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:10:56,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36109] to rsgroup deadServerGroup 2023-07-24 04:10:56,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:56,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 04:10:56,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:10:56,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 04:10:56,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36109,1690171852137] are moved back to default 2023-07-24 04:10:56,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-24 04:10:56,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:10:56,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-24 04:10:56,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:56,078 INFO [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39717%2C1690171855814, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:10:56,080 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:56,086 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36568, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:56,087 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36109] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36109,1690171852137' ***** 2023-07-24 04:10:56,087 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36109] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x18426628 2023-07-24 04:10:56,087 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:10:56,094 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:56,099 INFO [RS:0;jenkins-hbase4:36109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ff5d8c6{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:10:56,113 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:56,113 INFO [RS:0;jenkins-hbase4:36109] server.AbstractConnector(383): Stopped ServerConnector@516b648c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:10:56,115 INFO [RS:0;jenkins-hbase4:36109] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:10:56,115 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:56,146 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:56,147 INFO [RS:0;jenkins-hbase4:36109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d96518f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:10:56,148 INFO [RS:0;jenkins-hbase4:36109] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6b81b873{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:10:56,150 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:10:56,155 INFO [RS:0;jenkins-hbase4:36109] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:10:56,155 INFO [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814/jenkins-hbase4.apache.org%2C39717%2C1690171855814.1690171856079 2023-07-24 04:10:56,155 INFO [RS:0;jenkins-hbase4:36109] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:10:56,155 INFO [RS:0;jenkins-hbase4:36109] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:10:56,155 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:56,155 DEBUG [RS:0;jenkins-hbase4:36109] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x244fd597 to 127.0.0.1:59235 2023-07-24 04:10:56,155 DEBUG [RS:0;jenkins-hbase4:36109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:10:56,156 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:10:56,156 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36109,1690171852137; all regions closed. 2023-07-24 04:10:56,157 DEBUG [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:10:56,181 DEBUG [RS:0;jenkins-hbase4:36109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:10:56,181 INFO [RS:0;jenkins-hbase4:36109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36109%2C1690171852137:(num 1690171854413) 2023-07-24 04:10:56,181 DEBUG [RS:0;jenkins-hbase4:36109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:10:56,182 INFO [RS:0;jenkins-hbase4:36109] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:10:56,182 INFO [RS:0;jenkins-hbase4:36109] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:10:56,182 INFO [RS:0;jenkins-hbase4:36109] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:10:56,182 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:10:56,182 INFO [RS:0;jenkins-hbase4:36109] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:10:56,183 INFO [RS:0;jenkins-hbase4:36109] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
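The "***** STOPPING region server ... ***** / STOPPED: Called by admin client" lines above most likely correspond to an Admin.stopRegionServer() call from the test. A hedged sketch with the host:port taken from the log and the connection handling simplified:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class StopOneRegionServer {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Ask the region server (which, per the log, hosts no regions) to stop itself.
                admin.stopRegionServer("jenkins-hbase4.apache.org:36109");
            }
        }
    }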
2023-07-24 04:10:56,184 INFO [RS:0;jenkins-hbase4:36109] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36109 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,193 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,194 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36109,1690171852137] 2023-07-24 04:10:56,194 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,195 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36109,1690171852137; numProcessing=1 2023-07-24 04:10:56,195 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,195 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-24 04:10:56,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:56,197 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,197 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36109,1690171852137 already deleted, retry=false 2023-07-24 04:10:56,198 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,36109,1690171852137 on jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:10:56,198 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 znode expired, triggering replicatorRemoved event 2023-07-24 04:10:56,207 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,36109,1690171852137, splitWal=true, meta=false 2023-07-24 04:10:56,208 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=12 for jenkins-hbase4.apache.org,36109,1690171852137 (carryingMeta=false) jenkins-hbase4.apache.org,36109,1690171852137/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5b6c3fb5[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 04:10:56,208 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
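Once the ephemeral znode is gone, the master schedules a ServerCrashProcedure (pid=12 above); testClearNotProcessedDeadServer then exercises the dead-server bookkeeping. A hedged sketch of inspecting and clearing dead servers through the Admin API, assuming admin comes from the same Connection as in the earlier sketches:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class ClearProcessedDeadServers {
        static void clearDead(Admin admin) throws Exception {
            List<ServerName> dead = new ArrayList<>(admin.getClusterMetrics().getDeadServerNames());
            // clearDeadServers returns the servers that could NOT be cleared,
            // e.g. ones whose ServerCrashProcedure is still in flight.
            List<ServerName> notCleared = admin.clearDeadServers(dead);
            System.out.println("Still pending: " + notCleared);
        }
    }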
2023-07-24 04:10:56,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,215 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=12, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36109,1690171852137, splitWal=true, meta=false 2023-07-24 04:10:56,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:10:56,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:10:56,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:10:56,218 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,36109,1690171852137 had 0 regions 2023-07-24 04:10:56,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:10:56,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:10:56,220 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=12, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36109,1690171852137, splitWal=true, meta=false, isMeta: false 2023-07-24 04:10:56,222 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137-splitting 2023-07-24 04:10:56,224 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137-splitting dir is empty, no logs to split. 2023-07-24 04:10:56,224 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,36109,1690171852137 WAL count=0, meta=false 2023-07-24 04:10:56,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:10:56,232 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137-splitting dir is empty, no logs to split. 2023-07-24 04:10:56,232 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,36109,1690171852137 WAL count=0, meta=false 2023-07-24 04:10:56,232 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,36109,1690171852137 WAL splitting is done? 
wals=0, meta=false 2023-07-24 04:10:56,238 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,36109,1690171852137 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,36109,1690171852137-splitting does not exist. 2023-07-24 04:10:56,242 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,36109,1690171852137 after splitting done 2023-07-24 04:10:56,242 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,36109,1690171852137 from processing; numProcessing=0 2023-07-24 04:10:56,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36109,1690171852137, splitWal=true, meta=false in 42 msec 2023-07-24 04:10:56,298 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:10:56,298 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x10195863d980001, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:10:56,298 INFO [RS:0;jenkins-hbase4:36109] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36109,1690171852137; zookeeper connection closed. 2023-07-24 04:10:56,299 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,299 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 znode expired, triggering replicatorRemoved event 2023-07-24 04:10:56,299 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,299 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,300 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,36109,1690171852137 znode expired, triggering replicatorRemoved event 2023-07-24 04:10:56,300 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:56,301 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 04:10:56,301 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:10:56,307 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1567a550] 
hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1567a550 2023-07-24 04:10:56,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,308 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 04:10:56,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,311 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 04:10:56,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 04:10:56,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:10:56,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:10:56,325 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:10:56,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:10:56,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36109] to rsgroup default 2023-07-24 04:10:56,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase4.apache.org:36109 during move-to-default rsgroup because not online 2023-07-24 04:10:56,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 04:10:56,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:56,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-24 04:10:56,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-24 04:10:56,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-24 04:10:56,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:10:56,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup deadServerGroup 2023-07-24 04:10:56,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:10:56,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:10:56,359 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 04:10:56,375 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:10:56,375 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:10:56,376 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, 
maxQueueLength=30, handlerCount=3 2023-07-24 04:10:56,376 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:10:56,376 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:10:56,376 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:10:56,376 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:10:56,379 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43785 2023-07-24 04:10:56,379 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:10:56,380 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:10:56,381 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:10:56,383 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:10:56,384 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43785 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:10:56,387 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:437850x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:10:56,389 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:437850x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:10:56,389 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43785-0x10195863d98000d connected 2023-07-24 04:10:56,390 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 04:10:56,392 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:10:56,392 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43785 2023-07-24 04:10:56,398 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43785 2023-07-24 04:10:56,399 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43785 2023-07-24 04:10:56,401 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43785 2023-07-24 04:10:56,402 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43785 2023-07-24 04:10:56,404 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:10:56,404 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:10:56,404 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:10:56,405 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:10:56,405 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:10:56,405 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:10:56,405 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:10:56,406 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 34231 2023-07-24 04:10:56,406 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:10:56,411 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:10:56,411 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@71907284{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:10:56,412 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:10:56,412 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@60676a38{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:10:56,423 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:10:56,424 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:10:56,424 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:10:56,424 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 04:10:56,426 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:10:56,427 INFO [Listener at 
localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@49cb92f6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:10:56,429 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@6b49ad69{HTTP/1.1, (http/1.1)}{0.0.0.0:34231} 2023-07-24 04:10:56,429 INFO [Listener at localhost/41307] server.Server(415): Started @12728ms 2023-07-24 04:10:56,436 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:10:56,437 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:10:56,440 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:10:56,440 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:10:56,451 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:10:56,453 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ReadOnlyZKClient(139): Connect 0x5f30c910 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:10:56,460 DEBUG [RS:4;jenkins-hbase4:43785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51f73c86, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:10:56,460 DEBUG [RS:4;jenkins-hbase4:43785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55c8a260, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:10:56,471 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:43785 2023-07-24 04:10:56,471 INFO [RS:4;jenkins-hbase4:43785] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:10:56,471 INFO [RS:4;jenkins-hbase4:43785] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:10:56,471 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1022): About to register with Master. 
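[editor's note] The log entries above ("Restoring servers: 1", then the RS:4 startup on port 43785) correspond to the test restoring a region server that an earlier test case had stopped. A minimal sketch of that step, assuming an already-running HBaseTestingUtility mini cluster (TEST_UTIL and the method name are illustrative, not the test's exact code):

// Sketch only: restore one region server in a running mini cluster, producing
// the kind of RS startup sequence logged above.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class RestoreRegionServerSketch {
  public static void restoreOneServer(HBaseTestingUtility testUtil) throws Exception {
    MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
    // Start a fresh HRegionServer thread; it registers with the master and is
    // picked up by the rsgroup ServerEventsListenerThread into the "default" group.
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    // Block until the new server is fully online (RpcServer, WAL, chores started).
    rst.waitForServerOnline();
  }
}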
2023-07-24 04:10:56,472 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36883,1690171850269 with isa=jenkins-hbase4.apache.org/172.31.14.131:43785, startcode=1690171856375 2023-07-24 04:10:56,472 DEBUG [RS:4;jenkins-hbase4:43785] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:10:56,475 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46887, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:10:56,476 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36883] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,476 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:10:56,476 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:10:56,476 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:10:56,477 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39353 2023-07-24 04:10:56,479 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,479 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,480 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,480 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:10:56,480 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,481 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ZKUtil(162): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,481 WARN [RS:4;jenkins-hbase4:43785] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:10:56,481 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 04:10:56,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,481 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43785,1690171856375] 2023-07-24 04:10:56,481 INFO [RS:4;jenkins-hbase4:43785] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:10:56,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,481 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,483 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,483 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,484 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36883,1690171850269] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 04:10:56,484 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,485 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,485 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,486 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,487 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,489 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ZKUtil(162): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:56,490 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ZKUtil(162): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,490 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ZKUtil(162): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:56,491 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ZKUtil(162): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:56,491 DEBUG [RS:4;jenkins-hbase4:43785] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:10:56,492 INFO [RS:4;jenkins-hbase4:43785] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:10:56,495 INFO [RS:4;jenkins-hbase4:43785] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:10:56,500 INFO [RS:4;jenkins-hbase4:43785] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:10:56,500 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:56,502 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:10:56,506 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:10:56,506 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,506 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,506 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,506 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,506 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,506 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:10:56,507 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,507 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,507 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,507 DEBUG [RS:4;jenkins-hbase4:43785] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:10:56,512 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:56,512 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:56,512 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:10:56,525 INFO [RS:4;jenkins-hbase4:43785] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:10:56,525 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43785,1690171856375-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:10:56,538 INFO [RS:4;jenkins-hbase4:43785] regionserver.Replication(203): jenkins-hbase4.apache.org,43785,1690171856375 started 2023-07-24 04:10:56,538 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43785,1690171856375, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43785, sessionid=0x10195863d98000d 2023-07-24 04:10:56,538 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:10:56,538 DEBUG [RS:4;jenkins-hbase4:43785] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,538 DEBUG [RS:4;jenkins-hbase4:43785] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43785,1690171856375' 2023-07-24 04:10:56,538 DEBUG [RS:4;jenkins-hbase4:43785] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:10:56,539 DEBUG [RS:4;jenkins-hbase4:43785] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:10:56,540 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:10:56,540 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:10:56,540 DEBUG [RS:4;jenkins-hbase4:43785] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:56,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:10:56,540 DEBUG [RS:4;jenkins-hbase4:43785] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43785,1690171856375' 2023-07-24 04:10:56,540 DEBUG [RS:4;jenkins-hbase4:43785] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:10:56,541 DEBUG [RS:4;jenkins-hbase4:43785] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:10:56,542 DEBUG [RS:4;jenkins-hbase4:43785] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:10:56,542 INFO [RS:4;jenkins-hbase4:43785] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:10:56,542 INFO [RS:4;jenkins-hbase4:43785] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
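[editor's note] The trace that follows shows the teardown attempting to move the master's address (jenkins-hbase4.apache.org:36883, taken from the log) into the "master" rsgroup and receiving a ConstraintException, which TestRSGroupsBase logs as "Got this on setup, FYI" and tolerates. A hedged sketch of that call, assuming an open Connection to the mini cluster (conn, masterAddr, and the class name are illustrative):

// Sketch only: mirrors the rejected moveServers request seen in the trace below.
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  public static void tryMoveMaster(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 36883);
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
    } catch (ConstraintException expected) {
      // The master is not an online region server, so the move is refused,
      // matching the "is either offline or it does not exist" message below.
    }
  }
}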
2023-07-24 04:10:56,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:56,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:56,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:10:56,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:10:56,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:56,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38870 deadline: 1690173056560, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:10:56,561 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:10:56,563 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:56,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,565 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:10:56,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:10:56,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:56,598 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=479 (was 422) Potentially hanging thread: qtp739481640-705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xe88b18-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp739481640-707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xe88b18-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp739481640-710 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp739481640-709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43785Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7242e680-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x694ffec5-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39717 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x5f30c910-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-66f07666-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xe88b18-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x694ffec5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xdd71c56-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp739481640-704 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-635-acceptor-0@5cdf9472-ServerConnector@2be36392{HTTP/1.1, (http/1.1)}{0.0.0.0:34055} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:47128 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:4;jenkins-hbase4:43785 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,39717,1690171855814 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xe88b18-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (612341689) connection to localhost/127.0.0.1:42399 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:4;jenkins-hbase4:43785-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp739481640-706-acceptor-0@7fe4d632-ServerConnector@6b49ad69{HTTP/1.1, (http/1.1)}{0.0.0.0:34231} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (612341689) connection to localhost/127.0.0.1:42399 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x5f30c910 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp739481640-708 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:39717-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:40232 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xe88b18-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x694ffec5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp739481640-711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1558060001-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:37086 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39717Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x5f30c910-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=721 (was 678) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=558 (was 558), ProcessCount=176 (was 176), AvailableMemoryMB=6273 (was 6302) 2023-07-24 04:10:56,617 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=479, OpenFileDescriptor=721, MaxFileDescriptor=60000, SystemLoadAverage=558, ProcessCount=176, AvailableMemoryMB=6273 2023-07-24 04:10:56,618 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign 2023-07-24 04:10:56,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:10:56,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:10:56,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:10:56,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:10:56,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:10:56,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:10:56,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:10:56,645 INFO [RS:4;jenkins-hbase4:43785] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43785%2C1690171856375, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:10:56,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:10:56,651 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:10:56,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add 
rsgroup master 2023-07-24 04:10:56,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:56,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:56,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:10:56,691 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:10:56,692 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:10:56,692 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:10:56,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:10:56,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:56,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38870 deadline: 1690173056698, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:10:56,700 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:10:56,702 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:56,703 INFO [RS:4;jenkins-hbase4:43785] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375/jenkins-hbase4.apache.org%2C43785%2C1690171856375.1690171856647 2023-07-24 04:10:56,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:56,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:56,704 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:10:56,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:10:56,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:56,706 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-24 04:10:56,709 DEBUG [RS:4;jenkins-hbase4:43785] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:10:56,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-24 04:10:56,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] 
procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-24 04:10:56,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-24 04:10:56,737 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 04:10:56,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 24 msec 2023-07-24 04:10:56,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-24 04:10:56,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:10:56,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:56,861 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:10:56,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 14 2023-07-24 04:10:56,870 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:56,871 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:56,871 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:56,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 04:10:56,875 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:10:56,877 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:56,878 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 empty. 
2023-07-24 04:10:56,879 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:56,879 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-24 04:10:56,928 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-24 04:10:56,930 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2f32cc237f66e91cc2a30181816b9a61, NAME => 'Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:56,947 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:56,947 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 2f32cc237f66e91cc2a30181816b9a61, disabling compactions & flushes 2023-07-24 04:10:56,947 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:56,947 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:56,947 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. after waiting 0 ms 2023-07-24 04:10:56,947 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:56,947 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 
2023-07-24 04:10:56,947 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 2f32cc237f66e91cc2a30181816b9a61: 2023-07-24 04:10:56,951 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:10:56,952 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690171856952"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171856952"}]},"ts":"1690171856952"} 2023-07-24 04:10:56,955 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:10:56,956 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:10:56,957 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171856956"}]},"ts":"1690171856956"} 2023-07-24 04:10:56,958 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-24 04:10:56,961 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:10:56,962 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:10:56,962 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:10:56,962 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:10:56,962 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 04:10:56,962 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:10:56,962 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, ASSIGN}] 2023-07-24 04:10:56,964 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, ASSIGN 2023-07-24 04:10:56,965 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:10:56,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 04:10:57,115 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 04:10:57,117 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=2f32cc237f66e91cc2a30181816b9a61, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:57,117 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690171857117"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171857117"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171857117"}]},"ts":"1690171857117"} 2023-07-24 04:10:57,121 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE; OpenRegionProcedure 2f32cc237f66e91cc2a30181816b9a61, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:57,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 04:10:57,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:57,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2f32cc237f66e91cc2a30181816b9a61, NAME => 'Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:10:57,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:57,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,282 INFO [StoreOpener-2f32cc237f66e91cc2a30181816b9a61-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,284 DEBUG [StoreOpener-2f32cc237f66e91cc2a30181816b9a61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/f 2023-07-24 04:10:57,284 DEBUG [StoreOpener-2f32cc237f66e91cc2a30181816b9a61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/f 2023-07-24 04:10:57,284 INFO [StoreOpener-2f32cc237f66e91cc2a30181816b9a61-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2f32cc237f66e91cc2a30181816b9a61 columnFamilyName f 2023-07-24 04:10:57,285 INFO [StoreOpener-2f32cc237f66e91cc2a30181816b9a61-1] regionserver.HStore(310): Store=2f32cc237f66e91cc2a30181816b9a61/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:57,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:57,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2f32cc237f66e91cc2a30181816b9a61; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10234341120, jitterRate=-0.046852707862854004}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:57,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2f32cc237f66e91cc2a30181816b9a61: 2023-07-24 04:10:57,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61., pid=16, masterSystemTime=1690171857274 2023-07-24 04:10:57,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:57,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 
2023-07-24 04:10:57,299 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=2f32cc237f66e91cc2a30181816b9a61, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:57,299 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690171857299"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171857299"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171857299"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171857299"}]},"ts":"1690171857299"} 2023-07-24 04:10:57,305 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-24 04:10:57,305 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; OpenRegionProcedure 2f32cc237f66e91cc2a30181816b9a61, server=jenkins-hbase4.apache.org,41157,1690171852333 in 181 msec 2023-07-24 04:10:57,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=14 2023-07-24 04:10:57,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=14, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, ASSIGN in 343 msec 2023-07-24 04:10:57,311 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:10:57,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171857312"}]},"ts":"1690171857312"} 2023-07-24 04:10:57,314 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-24 04:10:57,317 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:10:57,320 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 460 msec 2023-07-24 04:10:57,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 04:10:57,476 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 14 completed 2023-07-24 04:10:57,477 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:57,483 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:57,488 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38468, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:57,491 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 
04:10:57,495 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59730, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:57,496 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:57,499 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52344, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:57,500 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:57,502 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58888, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:57,506 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-24 04:10:57,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndAssign 2023-07-24 04:10:57,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=17, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,523 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171857522"}]},"ts":"1690171857522"} 2023-07-24 04:10:57,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-24 04:10:57,525 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-24 04:10:57,528 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-24 04:10:57,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, UNASSIGN}] 2023-07-24 04:10:57,531 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, UNASSIGN 2023-07-24 04:10:57,532 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=2f32cc237f66e91cc2a30181816b9a61, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:57,532 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690171857532"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171857532"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171857532"}]},"ts":"1690171857532"} 2023-07-24 04:10:57,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE; CloseRegionProcedure 2f32cc237f66e91cc2a30181816b9a61, 
server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:57,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-24 04:10:57,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2f32cc237f66e91cc2a30181816b9a61, disabling compactions & flushes 2023-07-24 04:10:57,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:57,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:57,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. after waiting 0 ms 2023-07-24 04:10:57,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 2023-07-24 04:10:57,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:57,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61. 
2023-07-24 04:10:57,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2f32cc237f66e91cc2a30181816b9a61: 2023-07-24 04:10:57,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,704 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=2f32cc237f66e91cc2a30181816b9a61, regionState=CLOSED 2023-07-24 04:10:57,705 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690171857704"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171857704"}]},"ts":"1690171857704"} 2023-07-24 04:10:57,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-24 04:10:57,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; CloseRegionProcedure 2f32cc237f66e91cc2a30181816b9a61, server=jenkins-hbase4.apache.org,41157,1690171852333 in 172 msec 2023-07-24 04:10:57,712 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 04:10:57,712 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=2f32cc237f66e91cc2a30181816b9a61, UNASSIGN in 180 msec 2023-07-24 04:10:57,713 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171857713"}]},"ts":"1690171857713"} 2023-07-24 04:10:57,715 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-24 04:10:57,716 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-24 04:10:57,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 204 msec 2023-07-24 04:10:57,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-24 04:10:57,827 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 17 completed 2023-07-24 04:10:57,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndAssign 2023-07-24 04:10:57,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,842 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=20, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-24 04:10:57,844 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=20, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:57,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:57,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:57,850 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 04:10:57,853 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/recovered.edits] 2023-07-24 04:10:57,865 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61/recovered.edits/4.seqid 2023-07-24 04:10:57,866 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndAssign/2f32cc237f66e91cc2a30181816b9a61 2023-07-24 04:10:57,867 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-24 04:10:57,870 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=20, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,901 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-24 04:10:57,947 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 2023-07-24 04:10:57,950 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=20, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,950 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 
2023-07-24 04:10:57,950 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171857950"}]},"ts":"9223372036854775807"} 2023-07-24 04:10:57,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 04:10:57,953 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 04:10:57,953 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2f32cc237f66e91cc2a30181816b9a61, NAME => 'Group_testCreateAndAssign,,1690171856851.2f32cc237f66e91cc2a30181816b9a61.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 04:10:57,953 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 2023-07-24 04:10:57,953 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171857953"}]},"ts":"9223372036854775807"} 2023-07-24 04:10:57,955 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-24 04:10:57,958 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=20, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:10:57,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 125 msec 2023-07-24 04:10:58,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 04:10:58,154 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 20 completed 2023-07-24 04:10:58,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:58,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:58,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:10:58,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 04:10:58,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:10:58,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:10:58,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:10:58,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:10:58,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:58,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:10:58,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:10:58,174 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:10:58,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:10:58,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:58,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:58,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:58,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:10:58,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:58,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:58,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:10:58,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:58,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 163 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173058188, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:10:58,189 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:10:58,191 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:58,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:58,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:58,192 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:10:58,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:10:58,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:58,211 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=497 (was 479) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-683287797_17 at /127.0.0.1:47140 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741841_1017] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-683287797_17 at /127.0.0.1:37092 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741841_1017] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741841_1017, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741841_1017, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741841_1017, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1541897825_17 at /127.0.0.1:46988 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-683287797_17 at /127.0.0.1:40240 [Receiving block 
BP-1390451518-172.31.14.131-1690171846162:blk_1073741841_1017] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,43785,1690171856375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=750 (was 721) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=538 (was 558), ProcessCount=176 (was 176), AvailableMemoryMB=6251 (was 6273) 2023-07-24 04:10:58,226 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=497, OpenFileDescriptor=750, MaxFileDescriptor=60000, SystemLoadAverage=538, ProcessCount=176, AvailableMemoryMB=6250 2023-07-24 04:10:58,227 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-24 04:10:58,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:58,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:58,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:10:58,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:10:58,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:10:58,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:10:58,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:10:58,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:10:58,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:58,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:10:58,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:10:58,248 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:10:58,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:10:58,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:58,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:58,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-24 04:10:58,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:10:58,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:58,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:58,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:10:58,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:10:58,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 191 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173058262, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:10:58,263 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:10:58,265 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:58,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:10:58,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:10:58,267 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:10:58,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:10:58,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:10:58,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:10:58,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:10:58,275 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:10:58,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 21 2023-07-24 04:10:58,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 04:10:58,278 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:10:58,279 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:10:58,279 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:10:58,285 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:10:58,294 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,294 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,294 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,294 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,295 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,295 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,294 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,295 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,295 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e empty. 2023-07-24 04:10:58,295 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 empty. 2023-07-24 04:10:58,296 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 empty. 2023-07-24 04:10:58,296 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 empty. 2023-07-24 04:10:58,296 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 empty. 
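The ConstraintException traced above ("Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist") is raised because the test harness tries to move the master's address, which is not a live region server, into the "master" rsgroup, and the harness merely logs it as "Got this on setup, FYI". The following is a minimal, hedged sketch of that pattern against the rsgroup admin client; the host, port, and group name are placeholders taken from this log, and the exact moveServers signature (Set of Address) is assumed from the HBase 2.x hbase-rsgroup module rather than confirmed from the test source.

```java
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical address: the master's RPC endpoint is not a region server,
    // so the move is expected to be rejected with a ConstraintException.
    Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 36883);
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
      } catch (ConstraintException e) {
        // "Server ... is either offline or it does not exist." -- tolerated here,
        // mirroring the WARN "Got this on setup, FYI" entry in the log above.
      }
    }
  }
}
```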
2023-07-24 04:10:58,296 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,296 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,296 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 empty. 2023-07-24 04:10:58,296 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 empty. 2023-07-24 04:10:58,297 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,297 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,297 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 empty. 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d empty. 
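The CreateTableProcedure entries above and below correspond to creating 'Group_testCreateMultiRegion' with a single family 'f' and pre-split region boundaries (\x00\x02\x04\x06\x08, \x00"$&(, \x00BDFH, ...). A minimal sketch of creating such a pre-split table through the public HBase 2.x Admin API is shown below; the connection setup is generic, the class name is illustrative, and only the first few split keys visible in this log are reproduced, so it is not the test's actual code path.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'f', matching the descriptor logged by the master.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testCreateMultiRegion"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Split keys carve the key space into multiple regions up front; these
      // byte values mirror the first region boundaries seen in the log
      // (\x00\x02\x04\x06\x08, \x00"$&(, \x00BDFH).
      byte[][] splitKeys = new byte[][] {
          new byte[] {0x00, 0x02, 0x04, 0x06, 0x08},
          new byte[] {0x00, 0x22, 0x24, 0x26, 0x28},
          new byte[] {0x00, 0x42, 0x44, 0x46, 0x48}
      };
      admin.createTable(desc, splitKeys);
    }
  }
}
```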
2023-07-24 04:10:58,298 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b empty. 2023-07-24 04:10:58,298 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,299 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,299 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,299 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-24 04:10:58,325 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-24 04:10:58,327 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3c51243ece3758b803926a5484389e34, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,327 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5e996bc4a1d77d54cbb649199a886305, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,327 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 
aeb78a3db3e6fd257ad80f2e5e0add6e, NAME => 'Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,377 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,377 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 5e996bc4a1d77d54cbb649199a886305, disabling compactions & flushes 2023-07-24 04:10:58,380 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:58,380 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:58,380 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. after waiting 0 ms 2023-07-24 04:10:58,380 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:58,380 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 
2023-07-24 04:10:58,380 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 5e996bc4a1d77d54cbb649199a886305: 2023-07-24 04:10:58,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 04:10:58,381 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => c16118052a59d7211967c6fd0222da36, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,386 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,386 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 3c51243ece3758b803926a5484389e34, disabling compactions & flushes 2023-07-24 04:10:58,386 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:58,386 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:58,387 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. after waiting 0 ms 2023-07-24 04:10:58,387 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:58,387 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 
2023-07-24 04:10:58,387 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 3c51243ece3758b803926a5484389e34: 2023-07-24 04:10:58,387 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 16ae5bf2f9e3704e74a83b04208d4f20, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,390 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,391 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing aeb78a3db3e6fd257ad80f2e5e0add6e, disabling compactions & flushes 2023-07-24 04:10:58,391 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:58,391 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:58,391 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. after waiting 0 ms 2023-07-24 04:10:58,391 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:58,391 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 
2023-07-24 04:10:58,391 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for aeb78a3db3e6fd257ad80f2e5e0add6e: 2023-07-24 04:10:58,391 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => e871ccee14e30454c27e7760f7695695, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,423 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,424 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing c16118052a59d7211967c6fd0222da36, disabling compactions & flushes 2023-07-24 04:10:58,424 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:58,424 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:58,424 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. after waiting 0 ms 2023-07-24 04:10:58,424 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:58,425 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 
2023-07-24 04:10:58,425 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for c16118052a59d7211967c6fd0222da36: 2023-07-24 04:10:58,426 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 49a050ad90a5d5c994e650d5d4c306c4, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,446 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,447 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 16ae5bf2f9e3704e74a83b04208d4f20, disabling compactions & flushes 2023-07-24 04:10:58,447 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:58,447 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:58,448 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. after waiting 0 ms 2023-07-24 04:10:58,448 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:58,448 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 
2023-07-24 04:10:58,448 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 16ae5bf2f9e3704e74a83b04208d4f20: 2023-07-24 04:10:58,448 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => cd26f163dcad406136eb9bddca0f6810, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,470 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,471 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing e871ccee14e30454c27e7760f7695695, disabling compactions & flushes 2023-07-24 04:10:58,471 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:58,471 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:58,471 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. after waiting 0 ms 2023-07-24 04:10:58,471 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:58,471 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 
2023-07-24 04:10:58,471 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for e871ccee14e30454c27e7760f7695695: 2023-07-24 04:10:58,472 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 60834f7a98afcda3b9e986ee2c6f382b, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,491 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,492 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 49a050ad90a5d5c994e650d5d4c306c4, disabling compactions & flushes 2023-07-24 04:10:58,492 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:58,492 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:58,492 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. after waiting 0 ms 2023-07-24 04:10:58,492 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:58,492 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 
2023-07-24 04:10:58,492 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 49a050ad90a5d5c994e650d5d4c306c4: 2023-07-24 04:10:58,493 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 33866eca882741fef69f1046d9617b5d, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:10:58,503 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing cd26f163dcad406136eb9bddca0f6810, disabling compactions & flushes 2023-07-24 04:10:58,504 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:58,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:58,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. after waiting 0 ms 2023-07-24 04:10:58,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:58,505 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 
2023-07-24 04:10:58,505 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for cd26f163dcad406136eb9bddca0f6810: 2023-07-24 04:10:58,510 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,510 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 60834f7a98afcda3b9e986ee2c6f382b, disabling compactions & flushes 2023-07-24 04:10:58,511 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,511 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,511 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. after waiting 0 ms 2023-07-24 04:10:58,511 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,511 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,511 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 60834f7a98afcda3b9e986ee2c6f382b: 2023-07-24 04:10:58,519 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,519 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 33866eca882741fef69f1046d9617b5d, disabling compactions & flushes 2023-07-24 04:10:58,519 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,520 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,520 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 
after waiting 0 ms 2023-07-24 04:10:58,520 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,520 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,520 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 33866eca882741fef69f1046d9617b5d: 2023-07-24 04:10:58,524 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:10:58,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690171858271.3c51243ece3758b803926a5484389e34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,526 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690171858271.e871ccee14e30454c27e7760f7695695.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,526 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,526 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,526 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,526 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690171858271.33866eca882741fef69f1046d9617b5d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171858525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171858525"}]},"ts":"1690171858525"} 2023-07-24 04:10:58,530 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-24 04:10:58,532 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:10:58,532 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171858532"}]},"ts":"1690171858532"} 2023-07-24 04:10:58,533 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-24 04:10:58,539 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:10:58,539 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:10:58,539 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:10:58,539 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:10:58,539 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 04:10:58,539 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:10:58,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, ASSIGN}, {pid=27, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, ASSIGN}, {pid=28, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, ASSIGN}, {pid=29, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, ASSIGN}, {pid=30, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, ASSIGN}, {pid=31, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, ASSIGN}] 2023-07-24 04:10:58,543 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, ASSIGN 2023-07-24 04:10:58,543 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, ASSIGN 2023-07-24 04:10:58,544 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, ASSIGN 2023-07-24 04:10:58,545 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, ASSIGN 2023-07-24 04:10:58,546 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39717,1690171855814; forceNewPlan=false, retain=false 2023-07-24 04:10:58,546 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43785,1690171856375; forceNewPlan=false, retain=false 2023-07-24 04:10:58,547 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:10:58,547 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1690171852273; forceNewPlan=false, retain=false 2023-07-24 04:10:58,548 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, ASSIGN 2023-07-24 04:10:58,548 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, ASSIGN 2023-07-24 04:10:58,548 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, ASSIGN 2023-07-24 04:10:58,549 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, ASSIGN 2023-07-24 04:10:58,549 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, ASSIGN 2023-07-24 04:10:58,549 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:10:58,550 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39717,1690171855814; forceNewPlan=false, retain=false 2023-07-24 04:10:58,550 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=29, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1690171852273; forceNewPlan=false, retain=false 2023-07-24 04:10:58,551 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=28, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43785,1690171856375; forceNewPlan=false, retain=false 2023-07-24 04:10:58,551 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, ASSIGN 2023-07-24 04:10:58,551 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:10:58,552 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39717,1690171855814; forceNewPlan=false, retain=false 2023-07-24 04:10:58,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 04:10:58,697 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-24 04:10:58,702 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=33866eca882741fef69f1046d9617b5d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:58,702 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=49a050ad90a5d5c994e650d5d4c306c4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:58,702 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=e871ccee14e30454c27e7760f7695695, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:58,702 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=aeb78a3db3e6fd257ad80f2e5e0add6e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:58,702 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690171858271.33866eca882741fef69f1046d9617b5d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171858701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858701"}]},"ts":"1690171858701"} 2023-07-24 04:10:58,702 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690171858271.e871ccee14e30454c27e7760f7695695.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858702"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858702"}]},"ts":"1690171858702"} 2023-07-24 04:10:58,702 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=3c51243ece3758b803926a5484389e34, 
regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:58,702 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171858701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858701"}]},"ts":"1690171858701"} 2023-07-24 04:10:58,702 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858701"}]},"ts":"1690171858701"} 2023-07-24 04:10:58,702 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690171858271.3c51243ece3758b803926a5484389e34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858702"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858702"}]},"ts":"1690171858702"} 2023-07-24 04:10:58,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=31, state=RUNNABLE; OpenRegionProcedure 33866eca882741fef69f1046d9617b5d, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:58,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=27, state=RUNNABLE; OpenRegionProcedure e871ccee14e30454c27e7760f7695695, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:58,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=22, state=RUNNABLE; OpenRegionProcedure aeb78a3db3e6fd257ad80f2e5e0add6e, server=jenkins-hbase4.apache.org,43785,1690171856375}] 2023-07-24 04:10:58,711 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=60834f7a98afcda3b9e986ee2c6f382b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:58,711 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858711"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858711"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858711"}]},"ts":"1690171858711"} 2023-07-24 04:10:58,712 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=16ae5bf2f9e3704e74a83b04208d4f20, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:58,712 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858712"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858712"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858712"}]},"ts":"1690171858712"} 2023-07-24 04:10:58,713 INFO 
[PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=28, state=RUNNABLE; OpenRegionProcedure 49a050ad90a5d5c994e650d5d4c306c4, server=jenkins-hbase4.apache.org,43785,1690171856375}] 2023-07-24 04:10:58,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=5e996bc4a1d77d54cbb649199a886305, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:58,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858715"}]},"ts":"1690171858715"} 2023-07-24 04:10:58,715 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=24, state=RUNNABLE; OpenRegionProcedure 3c51243ece3758b803926a5484389e34, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:58,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; OpenRegionProcedure 60834f7a98afcda3b9e986ee2c6f382b, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:10:58,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=26, state=RUNNABLE; OpenRegionProcedure 16ae5bf2f9e3704e74a83b04208d4f20, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:10:58,720 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=23, state=RUNNABLE; OpenRegionProcedure 5e996bc4a1d77d54cbb649199a886305, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:10:58,720 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=cd26f163dcad406136eb9bddca0f6810, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:58,720 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858720"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858720"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858720"}]},"ts":"1690171858720"} 2023-07-24 04:10:58,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=29, state=RUNNABLE; OpenRegionProcedure cd26f163dcad406136eb9bddca0f6810, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:10:58,723 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=c16118052a59d7211967c6fd0222da36, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:58,724 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171858723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171858723"}]},"ts":"1690171858723"} 2023-07-24 04:10:58,726 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=25, state=RUNNABLE; OpenRegionProcedure c16118052a59d7211967c6fd0222da36, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:10:58,864 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:58,864 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:58,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:58,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3c51243ece3758b803926a5484389e34, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-24 04:10:58,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,866 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58898, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:58,867 INFO [StoreOpener-3c51243ece3758b803926a5484389e34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,870 DEBUG [StoreOpener-3c51243ece3758b803926a5484389e34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/f 2023-07-24 04:10:58,870 DEBUG [StoreOpener-3c51243ece3758b803926a5484389e34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/f 2023-07-24 04:10:58,870 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:58,872 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:10:58,872 INFO [StoreOpener-3c51243ece3758b803926a5484389e34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3c51243ece3758b803926a5484389e34 columnFamilyName f 2023-07-24 04:10:58,873 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:58,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aeb78a3db3e6fd257ad80f2e5e0add6e, NAME => 'Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-24 04:10:58,873 INFO [StoreOpener-3c51243ece3758b803926a5484389e34-1] regionserver.HStore(310): Store=3c51243ece3758b803926a5484389e34/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,874 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59746, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:10:58,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,876 INFO [StoreOpener-aeb78a3db3e6fd257ad80f2e5e0add6e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,879 DEBUG [StoreOpener-aeb78a3db3e6fd257ad80f2e5e0add6e-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/f 2023-07-24 04:10:58,879 DEBUG [StoreOpener-aeb78a3db3e6fd257ad80f2e5e0add6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/f 2023-07-24 04:10:58,879 INFO [StoreOpener-aeb78a3db3e6fd257ad80f2e5e0add6e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aeb78a3db3e6fd257ad80f2e5e0add6e columnFamilyName f 2023-07-24 04:10:58,880 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 60834f7a98afcda3b9e986ee2c6f382b, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-24 04:10:58,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 
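[Editor's note] The CompactionConfiguration(173) entries above print the effective per-store compaction settings as each family 'f' store is opened. The numbers shown are the stock defaults; a hedged sketch of the standard configuration keys they appear to correspond to (key-to-field mapping is the editor's reading, not something this test sets explicitly):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Values below match what the log reports for family 'f'.
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period: 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
        // A cluster started from this conf would log the same values when opening stores.
      }
    }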
2023-07-24 04:10:58,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c16118052a59d7211967c6fd0222da36, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-24 04:10:58,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,883 INFO [StoreOpener-aeb78a3db3e6fd257ad80f2e5e0add6e-1] regionserver.HStore(310): Store=aeb78a3db3e6fd257ad80f2e5e0add6e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:58,884 INFO [StoreOpener-60834f7a98afcda3b9e986ee2c6f382b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,886 INFO [StoreOpener-c16118052a59d7211967c6fd0222da36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,886 DEBUG [StoreOpener-60834f7a98afcda3b9e986ee2c6f382b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/f 2023-07-24 04:10:58,886 DEBUG [StoreOpener-60834f7a98afcda3b9e986ee2c6f382b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/f 2023-07-24 04:10:58,887 INFO [StoreOpener-60834f7a98afcda3b9e986ee2c6f382b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 60834f7a98afcda3b9e986ee2c6f382b columnFamilyName f 2023-07-24 04:10:58,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 04:10:58,888 INFO [StoreOpener-60834f7a98afcda3b9e986ee2c6f382b-1] regionserver.HStore(310): Store=60834f7a98afcda3b9e986ee2c6f382b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,890 DEBUG [StoreOpener-c16118052a59d7211967c6fd0222da36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/f 2023-07-24 04:10:58,890 DEBUG [StoreOpener-c16118052a59d7211967c6fd0222da36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/f 2023-07-24 04:10:58,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,890 INFO [StoreOpener-c16118052a59d7211967c6fd0222da36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c16118052a59d7211967c6fd0222da36 columnFamilyName f 2023-07-24 04:10:58,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,892 INFO [StoreOpener-c16118052a59d7211967c6fd0222da36-1] regionserver.HStore(310): Store=c16118052a59d7211967c6fd0222da36/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:58,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:58,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3c51243ece3758b803926a5484389e34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11758697760, jitterRate=0.09511406719684601}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3c51243ece3758b803926a5484389e34: 2023-07-24 04:10:58,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:58,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34., pid=36, masterSystemTime=1690171858857 2023-07-24 04:10:58,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 60834f7a98afcda3b9e986ee2c6f382b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10900437440, jitterRate=0.015182346105575562}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 60834f7a98afcda3b9e986ee2c6f382b: 2023-07-24 04:10:58,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b., pid=37, masterSystemTime=1690171858870 2023-07-24 04:10:58,908 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:58,909 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:58,909 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 33866eca882741fef69f1046d9617b5d, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-24 04:10:58,909 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aeb78a3db3e6fd257ad80f2e5e0add6e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11577136000, jitterRate=0.07820481061935425}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aeb78a3db3e6fd257ad80f2e5e0add6e: 2023-07-24 04:10:58,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c16118052a59d7211967c6fd0222da36; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9852421920, jitterRate=-0.08242170512676239}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,911 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating 
hbase:meta row=3c51243ece3758b803926a5484389e34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:58,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c16118052a59d7211967c6fd0222da36: 2023-07-24 04:10:58,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:58,911 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690171858271.3c51243ece3758b803926a5484389e34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858911"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858911"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858911"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858911"}]},"ts":"1690171858911"} 2023-07-24 04:10:58,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:58,912 INFO [StoreOpener-33866eca882741fef69f1046d9617b5d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e996bc4a1d77d54cbb649199a886305, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-24 04:10:58,912 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=60834f7a98afcda3b9e986ee2c6f382b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:58,914 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858912"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858912"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858912"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858912"}]},"ts":"1690171858912"} 2023-07-24 04:10:58,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; 
preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,917 DEBUG [StoreOpener-33866eca882741fef69f1046d9617b5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/f 2023-07-24 04:10:58,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36., pid=41, masterSystemTime=1690171858875 2023-07-24 04:10:58,919 INFO [StoreOpener-5e996bc4a1d77d54cbb649199a886305-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e., pid=34, masterSystemTime=1690171858864 2023-07-24 04:10:58,919 DEBUG [StoreOpener-33866eca882741fef69f1046d9617b5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/f 2023-07-24 04:10:58,925 INFO [StoreOpener-33866eca882741fef69f1046d9617b5d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 33866eca882741fef69f1046d9617b5d columnFamilyName f 2023-07-24 04:10:58,926 DEBUG [StoreOpener-5e996bc4a1d77d54cbb649199a886305-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/f 2023-07-24 04:10:58,927 DEBUG [StoreOpener-5e996bc4a1d77d54cbb649199a886305-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/f 2023-07-24 04:10:58,927 INFO [StoreOpener-33866eca882741fef69f1046d9617b5d-1] regionserver.HStore(310): Store=33866eca882741fef69f1046d9617b5d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,928 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,929 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=24 2023-07-24 04:10:58,929 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=24, state=SUCCESS; OpenRegionProcedure 3c51243ece3758b803926a5484389e34, server=jenkins-hbase4.apache.org,41157,1690171852333 in 201 msec 2023-07-24 04:10:58,929 INFO [StoreOpener-5e996bc4a1d77d54cbb649199a886305-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e996bc4a1d77d54cbb649199a886305 columnFamilyName f 2023-07-24 04:10:58,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:58,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:58,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 
2023-07-24 04:10:58,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd26f163dcad406136eb9bddca0f6810, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-24 04:10:58,930 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=c16118052a59d7211967c6fd0222da36, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:58,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,932 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858930"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858930"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858930"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858930"}]},"ts":"1690171858930"} 2023-07-24 04:10:58,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:58,933 INFO [StoreOpener-5e996bc4a1d77d54cbb649199a886305-1] regionserver.HStore(310): Store=5e996bc4a1d77d54cbb649199a886305/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:58,935 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=aeb78a3db3e6fd257ad80f2e5e0add6e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:58,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 
2023-07-24 04:10:58,935 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171858935"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858935"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858935"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858935"}]},"ts":"1690171858935"} 2023-07-24 04:10:58,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49a050ad90a5d5c994e650d5d4c306c4, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-24 04:10:58,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-24 04:10:58,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; OpenRegionProcedure 60834f7a98afcda3b9e986ee2c6f382b, server=jenkins-hbase4.apache.org,39717,1690171855814 in 208 msec 2023-07-24 04:10:58,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,938 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, ASSIGN in 389 msec 2023-07-24 04:10:58,940 INFO [StoreOpener-cd26f163dcad406136eb9bddca0f6810-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:58,941 INFO 
[StoreOpener-49a050ad90a5d5c994e650d5d4c306c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,941 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, ASSIGN in 397 msec 2023-07-24 04:10:58,942 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=25 2023-07-24 04:10:58,942 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=25, state=SUCCESS; OpenRegionProcedure c16118052a59d7211967c6fd0222da36, server=jenkins-hbase4.apache.org,37679,1690171852273 in 210 msec 2023-07-24 04:10:58,943 DEBUG [StoreOpener-49a050ad90a5d5c994e650d5d4c306c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/f 2023-07-24 04:10:58,944 DEBUG [StoreOpener-49a050ad90a5d5c994e650d5d4c306c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/f 2023-07-24 04:10:58,944 DEBUG [StoreOpener-cd26f163dcad406136eb9bddca0f6810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/f 2023-07-24 04:10:58,944 DEBUG [StoreOpener-cd26f163dcad406136eb9bddca0f6810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/f 2023-07-24 04:10:58,944 INFO [StoreOpener-49a050ad90a5d5c994e650d5d4c306c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49a050ad90a5d5c994e650d5d4c306c4 columnFamilyName f 2023-07-24 04:10:58,944 INFO [StoreOpener-cd26f163dcad406136eb9bddca0f6810-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd26f163dcad406136eb9bddca0f6810 columnFamilyName f 2023-07-24 04:10:58,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=22 2023-07-24 04:10:58,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=22, state=SUCCESS; OpenRegionProcedure aeb78a3db3e6fd257ad80f2e5e0add6e, server=jenkins-hbase4.apache.org,43785,1690171856375 in 231 msec 2023-07-24 04:10:58,945 INFO [StoreOpener-49a050ad90a5d5c994e650d5d4c306c4-1] regionserver.HStore(310): Store=49a050ad90a5d5c994e650d5d4c306c4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,946 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, ASSIGN in 402 msec 2023-07-24 04:10:58,946 INFO [StoreOpener-cd26f163dcad406136eb9bddca0f6810-1] regionserver.HStore(310): Store=cd26f163dcad406136eb9bddca0f6810/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:58,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 33866eca882741fef69f1046d9617b5d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10208904640, jitterRate=-0.04922166466712952}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 33866eca882741fef69f1046d9617b5d: 2023-07-24 04:10:58,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, ASSIGN in 405 msec 2023-07-24 04:10:58,948 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d., pid=32, masterSystemTime=1690171858857 2023-07-24 04:10:58,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:58,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:58,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e871ccee14e30454c27e7760f7695695, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-24 04:10:58,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,954 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=33866eca882741fef69f1046d9617b5d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:58,954 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690171858271.33866eca882741fef69f1046d9617b5d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171858953"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858953"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858953"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858953"}]},"ts":"1690171858953"} 2023-07-24 04:10:58,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:58,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5e996bc4a1d77d54cbb649199a886305; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11555528000, jitterRate=0.07619240880012512}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5e996bc4a1d77d54cbb649199a886305: 2023-07-24 04:10:58,955 INFO [StoreOpener-e871ccee14e30454c27e7760f7695695-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:58,957 DEBUG [StoreOpener-e871ccee14e30454c27e7760f7695695-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/f 2023-07-24 04:10:58,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305., pid=39, masterSystemTime=1690171858870 2023-07-24 04:10:58,958 DEBUG [StoreOpener-e871ccee14e30454c27e7760f7695695-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/f 2023-07-24 04:10:58,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49a050ad90a5d5c994e650d5d4c306c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9709252800, jitterRate=-0.09575536847114563}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49a050ad90a5d5c994e650d5d4c306c4: 2023-07-24 04:10:58,960 INFO [StoreOpener-e871ccee14e30454c27e7760f7695695-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e871ccee14e30454c27e7760f7695695 columnFamilyName f 2023-07-24 04:10:58,960 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4., pid=35, masterSystemTime=1690171858864 2023-07-24 04:10:58,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd26f163dcad406136eb9bddca0f6810; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9534587520, jitterRate=-0.11202234029769897}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:58,961 INFO [StoreOpener-e871ccee14e30454c27e7760f7695695-1] regionserver.HStore(310): Store=e871ccee14e30454c27e7760f7695695/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd26f163dcad406136eb9bddca0f6810: 2023-07-24 04:10:58,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:58,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810., pid=40, masterSystemTime=1690171858875 2023-07-24 04:10:58,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:58,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 
2023-07-24 04:10:58,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 16ae5bf2f9e3704e74a83b04208d4f20, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-24 04:10:58,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:58,964 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=5e996bc4a1d77d54cbb649199a886305, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:58,964 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:58,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,964 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858964"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858964"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858964"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858964"}]},"ts":"1690171858964"} 2023-07-24 04:10:58,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:10:58,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=31 2023-07-24 04:10:58,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=31, state=SUCCESS; OpenRegionProcedure 33866eca882741fef69f1046d9617b5d, server=jenkins-hbase4.apache.org,41157,1690171852333 in 254 msec 2023-07-24 04:10:58,966 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta 
row=49a050ad90a5d5c994e650d5d4c306c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:58,966 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858966"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858966"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858966"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858966"}]},"ts":"1690171858966"} 2023-07-24 04:10:58,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:58,967 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:58,968 INFO [StoreOpener-16ae5bf2f9e3704e74a83b04208d4f20-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,969 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=cd26f163dcad406136eb9bddca0f6810, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:58,970 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171858969"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171858969"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171858969"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171858969"}]},"ts":"1690171858969"} 2023-07-24 04:10:58,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:58,975 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, ASSIGN in 425 msec 2023-07-24 04:10:58,976 DEBUG [StoreOpener-16ae5bf2f9e3704e74a83b04208d4f20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/f 2023-07-24 04:10:58,977 DEBUG [StoreOpener-16ae5bf2f9e3704e74a83b04208d4f20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/f 2023-07-24 04:10:58,978 INFO [StoreOpener-16ae5bf2f9e3704e74a83b04208d4f20-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 16ae5bf2f9e3704e74a83b04208d4f20 columnFamilyName f 2023-07-24 04:10:58,978 INFO [StoreOpener-16ae5bf2f9e3704e74a83b04208d4f20-1] regionserver.HStore(310): Store=16ae5bf2f9e3704e74a83b04208d4f20/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:10:58,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:58,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=23 2023-07-24 04:10:58,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=23, state=SUCCESS; OpenRegionProcedure 5e996bc4a1d77d54cbb649199a886305, server=jenkins-hbase4.apache.org,39717,1690171855814 in 248 msec 2023-07-24 04:10:58,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=28 2023-07-24 04:10:58,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=28, state=SUCCESS; OpenRegionProcedure 49a050ad90a5d5c994e650d5d4c306c4, server=jenkins-hbase4.apache.org,43785,1690171856375 in 263 msec 2023-07-24 04:10:58,991 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=29 2023-07-24 04:10:58,991 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=29, state=SUCCESS; OpenRegionProcedure cd26f163dcad406136eb9bddca0f6810, server=jenkins-hbase4.apache.org,37679,1690171852273 in 254 msec 2023-07-24 04:10:58,992 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, ASSIGN in 447 msec 2023-07-24 04:10:58,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:10:58,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e871ccee14e30454c27e7760f7695695; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10436978240, jitterRate=-0.027980655431747437}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:59,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e871ccee14e30454c27e7760f7695695: 2023-07-24 04:10:59,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 16ae5bf2f9e3704e74a83b04208d4f20; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10582767360, jitterRate=-0.014402985572814941}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:10:59,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 16ae5bf2f9e3704e74a83b04208d4f20: 2023-07-24 04:10:59,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695., pid=33, masterSystemTime=1690171858857 2023-07-24 04:10:59,001 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, ASSIGN in 450 msec 2023-07-24 04:10:59,001 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, ASSIGN in 451 msec 2023-07-24 04:10:59,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20., pid=38, masterSystemTime=1690171858870 2023-07-24 04:10:59,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:59,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 
2023-07-24 04:10:59,005 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=e871ccee14e30454c27e7760f7695695, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:59,006 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690171858271.e871ccee14e30454c27e7760f7695695.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859005"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171859005"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171859005"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171859005"}]},"ts":"1690171859005"} 2023-07-24 04:10:59,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:59,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:59,007 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=16ae5bf2f9e3704e74a83b04208d4f20, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:59,007 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859007"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171859007"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171859007"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171859007"}]},"ts":"1690171859007"} 2023-07-24 04:10:59,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=27 2023-07-24 04:10:59,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=27, state=SUCCESS; OpenRegionProcedure e871ccee14e30454c27e7760f7695695, server=jenkins-hbase4.apache.org,41157,1690171852333 in 304 msec 2023-07-24 04:10:59,015 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=26 2023-07-24 04:10:59,015 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=26, state=SUCCESS; OpenRegionProcedure 16ae5bf2f9e3704e74a83b04208d4f20, server=jenkins-hbase4.apache.org,39717,1690171855814 in 293 msec 2023-07-24 04:10:59,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, ASSIGN in 473 msec 2023-07-24 04:10:59,017 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=21 2023-07-24 04:10:59,017 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, ASSIGN in 475 msec 2023-07-24 04:10:59,018 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure 
table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:10:59,019 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171859019"}]},"ts":"1690171859019"} 2023-07-24 04:10:59,020 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-24 04:10:59,023 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:10:59,026 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 751 msec 2023-07-24 04:10:59,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 04:10:59,389 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 21 completed 2023-07-24 04:10:59,389 DEBUG [Listener at localhost/41307] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-24 04:10:59,390 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:59,398 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 2023-07-24 04:10:59,399 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:10:59,399 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 
2023-07-24 04:10:59,401 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-24 04:10:59,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateMultiRegion 2023-07-24 04:10:59,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=42, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:10:59,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=42 2023-07-24 04:10:59,423 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171859423"}]},"ts":"1690171859423"} 2023-07-24 04:10:59,425 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-24 04:10:59,427 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-24 04:10:59,432 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, UNASSIGN}, {pid=44, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, UNASSIGN}, {pid=45, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, UNASSIGN}, {pid=46, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, UNASSIGN}, {pid=47, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, UNASSIGN}, {pid=48, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, UNASSIGN}, {pid=49, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, UNASSIGN}, {pid=50, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, UNASSIGN}, {pid=51, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, UNASSIGN}, {pid=52, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, UNASSIGN}] 2023-07-24 04:10:59,434 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, UNASSIGN 2023-07-24 04:10:59,434 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=42, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, UNASSIGN 2023-07-24 04:10:59,434 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, UNASSIGN 2023-07-24 04:10:59,435 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, UNASSIGN 2023-07-24 04:10:59,435 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, UNASSIGN 2023-07-24 04:10:59,436 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=33866eca882741fef69f1046d9617b5d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:59,436 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690171858271.33866eca882741fef69f1046d9617b5d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171859436"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859436"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859436"}]},"ts":"1690171859436"} 2023-07-24 04:10:59,437 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=cd26f163dcad406136eb9bddca0f6810, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:59,437 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=aeb78a3db3e6fd257ad80f2e5e0add6e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:59,437 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859437"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859437"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859437"}]},"ts":"1690171859437"} 2023-07-24 04:10:59,437 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171859437"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859437"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859437"}]},"ts":"1690171859437"} 2023-07-24 04:10:59,437 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=60834f7a98afcda3b9e986ee2c6f382b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:59,437 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859437"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859437"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859437"}]},"ts":"1690171859437"} 2023-07-24 04:10:59,438 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=49a050ad90a5d5c994e650d5d4c306c4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:10:59,438 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859437"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859437"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859437"}]},"ts":"1690171859437"} 2023-07-24 04:10:59,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=51, state=RUNNABLE; CloseRegionProcedure 33866eca882741fef69f1046d9617b5d, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:59,441 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure cd26f163dcad406136eb9bddca0f6810, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:10:59,443 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=52, state=RUNNABLE; CloseRegionProcedure aeb78a3db3e6fd257ad80f2e5e0add6e, server=jenkins-hbase4.apache.org,43785,1690171856375}] 2023-07-24 04:10:59,445 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=50, state=RUNNABLE; CloseRegionProcedure 60834f7a98afcda3b9e986ee2c6f382b, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:10:59,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=48, state=RUNNABLE; CloseRegionProcedure 49a050ad90a5d5c994e650d5d4c306c4, server=jenkins-hbase4.apache.org,43785,1690171856375}] 2023-07-24 04:10:59,447 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, UNASSIGN 2023-07-24 04:10:59,449 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, UNASSIGN 2023-07-24 04:10:59,450 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=16ae5bf2f9e3704e74a83b04208d4f20, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:59,450 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=e871ccee14e30454c27e7760f7695695, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:59,450 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859450"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859450"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859450"}]},"ts":"1690171859450"} 2023-07-24 04:10:59,450 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690171858271.e871ccee14e30454c27e7760f7695695.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859450"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859450"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859450"}]},"ts":"1690171859450"} 2023-07-24 04:10:59,452 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, UNASSIGN 2023-07-24 04:10:59,453 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, UNASSIGN 2023-07-24 04:10:59,453 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=c16118052a59d7211967c6fd0222da36, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:10:59,453 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=42, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, UNASSIGN 2023-07-24 04:10:59,454 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859453"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859453"}]},"ts":"1690171859453"} 2023-07-24 04:10:59,455 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=46, state=RUNNABLE; CloseRegionProcedure 16ae5bf2f9e3704e74a83b04208d4f20, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:10:59,456 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=47, state=RUNNABLE; CloseRegionProcedure e871ccee14e30454c27e7760f7695695, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:59,456 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=5e996bc4a1d77d54cbb649199a886305, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:10:59,456 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=3c51243ece3758b803926a5484389e34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:10:59,456 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859456"}]},"ts":"1690171859456"} 2023-07-24 04:10:59,456 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690171858271.3c51243ece3758b803926a5484389e34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171859456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171859456"}]},"ts":"1690171859456"} 2023-07-24 04:10:59,457 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=45, state=RUNNABLE; CloseRegionProcedure c16118052a59d7211967c6fd0222da36, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:10:59,459 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=43, state=RUNNABLE; CloseRegionProcedure 5e996bc4a1d77d54cbb649199a886305, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:10:59,461 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=44, state=RUNNABLE; CloseRegionProcedure 3c51243ece3758b803926a5484389e34, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:10:59,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=42 2023-07-24 04:10:59,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:59,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c51243ece3758b803926a5484389e34, disabling compactions & flushes 2023-07-24 04:10:59,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:59,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:59,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. after waiting 0 ms 2023-07-24 04:10:59,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:59,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:59,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd26f163dcad406136eb9bddca0f6810, disabling compactions & flushes 2023-07-24 04:10:59,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 
2023-07-24 04:10:59,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:59,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:59,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. after waiting 0 ms 2023-07-24 04:10:59,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:59,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:59,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aeb78a3db3e6fd257ad80f2e5e0add6e, disabling compactions & flushes 2023-07-24 04:10:59,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:59,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:59,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. after waiting 0 ms 2023-07-24 04:10:59,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:59,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 60834f7a98afcda3b9e986ee2c6f382b, disabling compactions & flushes 2023-07-24 04:10:59,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:59,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 2023-07-24 04:10:59,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. after waiting 0 ms 2023-07-24 04:10:59,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 
2023-07-24 04:10:59,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34. 2023-07-24 04:10:59,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c51243ece3758b803926a5484389e34: 2023-07-24 04:10:59,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3c51243ece3758b803926a5484389e34 2023-07-24 04:10:59,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:59,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 33866eca882741fef69f1046d9617b5d, disabling compactions & flushes 2023-07-24 04:10:59,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:59,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:59,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. after waiting 0 ms 2023-07-24 04:10:59,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 
2023-07-24 04:10:59,675 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=3c51243ece3758b803926a5484389e34, regionState=CLOSED 2023-07-24 04:10:59,675 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690171858271.3c51243ece3758b803926a5484389e34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859675"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859675"}]},"ts":"1690171859675"} 2023-07-24 04:10:59,682 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=44 2023-07-24 04:10:59,682 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=44, state=SUCCESS; CloseRegionProcedure 3c51243ece3758b803926a5484389e34, server=jenkins-hbase4.apache.org,41157,1690171852333 in 218 msec 2023-07-24 04:10:59,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810. 2023-07-24 04:10:59,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e. 2023-07-24 04:10:59,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd26f163dcad406136eb9bddca0f6810: 2023-07-24 04:10:59,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aeb78a3db3e6fd257ad80f2e5e0add6e: 2023-07-24 04:10:59,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d. 2023-07-24 04:10:59,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b. 
2023-07-24 04:10:59,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 33866eca882741fef69f1046d9617b5d: 2023-07-24 04:10:59,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:10:59,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3c51243ece3758b803926a5484389e34, UNASSIGN in 253 msec 2023-07-24 04:10:59,694 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=cd26f163dcad406136eb9bddca0f6810, regionState=CLOSED 2023-07-24 04:10:59,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:59,695 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859694"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859694"}]},"ts":"1690171859694"} 2023-07-24 04:10:59,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 60834f7a98afcda3b9e986ee2c6f382b: 2023-07-24 04:10:59,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c16118052a59d7211967c6fd0222da36, disabling compactions & flushes 2023-07-24 04:10:59,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:59,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:59,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. after waiting 0 ms 2023-07-24 04:10:59,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 2023-07-24 04:10:59,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:10:59,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:59,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36. 
2023-07-24 04:10:59,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c16118052a59d7211967c6fd0222da36: 2023-07-24 04:10:59,706 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=aeb78a3db3e6fd257ad80f2e5e0add6e, regionState=CLOSED 2023-07-24 04:10:59,707 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171859706"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859706"}]},"ts":"1690171859706"} 2023-07-24 04:10:59,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 33866eca882741fef69f1046d9617b5d 2023-07-24 04:10:59,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:59,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49a050ad90a5d5c994e650d5d4c306c4, disabling compactions & flushes 2023-07-24 04:10:59,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e871ccee14e30454c27e7760f7695695, disabling compactions & flushes 2023-07-24 04:10:59,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:59,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:59,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:59,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. after waiting 0 ms 2023-07-24 04:10:59,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:59,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. after waiting 0 ms 2023-07-24 04:10:59,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:59,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 
2023-07-24 04:10:59,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=33866eca882741fef69f1046d9617b5d, regionState=CLOSED 2023-07-24 04:10:59,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690171858271.33866eca882741fef69f1046d9617b5d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690171859715"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859715"}]},"ts":"1690171859715"} 2023-07-24 04:10:59,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:10:59,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:59,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5e996bc4a1d77d54cbb649199a886305, disabling compactions & flushes 2023-07-24 04:10:59,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:59,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:59,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. after waiting 0 ms 2023-07-24 04:10:59,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 2023-07-24 04:10:59,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4. 2023-07-24 04:10:59,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49a050ad90a5d5c994e650d5d4c306c4: 2023-07-24 04:10:59,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=42 2023-07-24 04:10:59,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305. 
2023-07-24 04:10:59,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5e996bc4a1d77d54cbb649199a886305: 2023-07-24 04:10:59,740 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=60834f7a98afcda3b9e986ee2c6f382b, regionState=CLOSED 2023-07-24 04:10:59,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-24 04:10:59,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure cd26f163dcad406136eb9bddca0f6810, server=jenkins-hbase4.apache.org,37679,1690171852273 in 265 msec 2023-07-24 04:10:59,742 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859740"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859740"}]},"ts":"1690171859740"} 2023-07-24 04:10:59,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695. 2023-07-24 04:10:59,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e871ccee14e30454c27e7760f7695695: 2023-07-24 04:10:59,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c16118052a59d7211967c6fd0222da36 2023-07-24 04:10:59,747 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-24 04:10:59,747 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=c16118052a59d7211967c6fd0222da36, regionState=CLOSED 2023-07-24 04:10:59,747 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; CloseRegionProcedure aeb78a3db3e6fd257ad80f2e5e0add6e, server=jenkins-hbase4.apache.org,43785,1690171856375 in 268 msec 2023-07-24 04:10:59,747 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859747"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859747"}]},"ts":"1690171859747"} 2023-07-24 04:10:59,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:10:59,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=51 2023-07-24 04:10:59,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=51, state=SUCCESS; CloseRegionProcedure 33866eca882741fef69f1046d9617b5d, server=jenkins-hbase4.apache.org,41157,1690171852333 in 289 msec 2023-07-24 04:10:59,751 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=42, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testCreateMultiRegion, region=cd26f163dcad406136eb9bddca0f6810, UNASSIGN in 310 msec 2023-07-24 04:10:59,752 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=aeb78a3db3e6fd257ad80f2e5e0add6e, UNASSIGN in 316 msec 2023-07-24 04:10:59,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:10:59,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:59,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 16ae5bf2f9e3704e74a83b04208d4f20, disabling compactions & flushes 2023-07-24 04:10:59,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:59,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 2023-07-24 04:10:59,754 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=49a050ad90a5d5c994e650d5d4c306c4, regionState=CLOSED 2023-07-24 04:10:59,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. after waiting 0 ms 2023-07-24 04:10:59,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 
2023-07-24 04:10:59,754 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859754"}]},"ts":"1690171859754"} 2023-07-24 04:10:59,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e871ccee14e30454c27e7760f7695695 2023-07-24 04:10:59,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=33866eca882741fef69f1046d9617b5d, UNASSIGN in 319 msec 2023-07-24 04:10:59,762 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=5e996bc4a1d77d54cbb649199a886305, regionState=CLOSED 2023-07-24 04:10:59,763 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859762"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859762"}]},"ts":"1690171859762"} 2023-07-24 04:10:59,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=45 2023-07-24 04:10:59,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=45, state=SUCCESS; CloseRegionProcedure c16118052a59d7211967c6fd0222da36, server=jenkins-hbase4.apache.org,37679,1690171852273 in 293 msec 2023-07-24 04:10:59,765 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=50 2023-07-24 04:10:59,765 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=e871ccee14e30454c27e7760f7695695, regionState=CLOSED 2023-07-24 04:10:59,765 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=50, state=SUCCESS; CloseRegionProcedure 60834f7a98afcda3b9e986ee2c6f382b, server=jenkins-hbase4.apache.org,39717,1690171855814 in 303 msec 2023-07-24 04:10:59,765 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690171858271.e871ccee14e30454c27e7760f7695695.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859765"}]},"ts":"1690171859765"} 2023-07-24 04:10:59,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:10:59,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20. 
2023-07-24 04:10:59,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 16ae5bf2f9e3704e74a83b04208d4f20: 2023-07-24 04:10:59,775 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=c16118052a59d7211967c6fd0222da36, UNASSIGN in 334 msec 2023-07-24 04:10:59,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=60834f7a98afcda3b9e986ee2c6f382b, UNASSIGN in 334 msec 2023-07-24 04:10:59,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:10:59,777 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=48 2023-07-24 04:10:59,777 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=48, state=SUCCESS; CloseRegionProcedure 49a050ad90a5d5c994e650d5d4c306c4, server=jenkins-hbase4.apache.org,43785,1690171856375 in 317 msec 2023-07-24 04:10:59,778 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=16ae5bf2f9e3704e74a83b04208d4f20, regionState=CLOSED 2023-07-24 04:10:59,778 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171859778"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171859778"}]},"ts":"1690171859778"} 2023-07-24 04:10:59,778 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=43 2023-07-24 04:10:59,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=43, state=SUCCESS; CloseRegionProcedure 5e996bc4a1d77d54cbb649199a886305, server=jenkins-hbase4.apache.org,39717,1690171855814 in 307 msec 2023-07-24 04:10:59,781 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=47 2023-07-24 04:10:59,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=49a050ad90a5d5c994e650d5d4c306c4, UNASSIGN in 346 msec 2023-07-24 04:10:59,781 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=47, state=SUCCESS; CloseRegionProcedure e871ccee14e30454c27e7760f7695695, server=jenkins-hbase4.apache.org,41157,1690171852333 in 319 msec 2023-07-24 04:10:59,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5e996bc4a1d77d54cbb649199a886305, UNASSIGN in 350 msec 2023-07-24 04:10:59,786 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e871ccee14e30454c27e7760f7695695, UNASSIGN in 352 msec 2023-07-24 04:10:59,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=46 2023-07-24 04:10:59,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=46, state=SUCCESS; CloseRegionProcedure 16ae5bf2f9e3704e74a83b04208d4f20, 
server=jenkins-hbase4.apache.org,39717,1690171855814 in 325 msec 2023-07-24 04:10:59,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=42 2023-07-24 04:10:59,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=42, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=16ae5bf2f9e3704e74a83b04208d4f20, UNASSIGN in 358 msec 2023-07-24 04:10:59,791 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171859791"}]},"ts":"1690171859791"} 2023-07-24 04:10:59,793 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-24 04:10:59,796 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-24 04:10:59,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion in 395 msec 2023-07-24 04:11:00,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=42 2023-07-24 04:11:00,036 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateMultiRegion, procId: 42 completed 2023-07-24 04:11:00,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateMultiRegion 2023-07-24 04:11:00,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:00,042 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=63, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:00,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-24 04:11:00,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:00,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:00,047 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=63, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:00,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 2023-07-24 04:11:00,063 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 2023-07-24 04:11:00,072 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/recovered.edits] 2023-07-24 04:11:00,073 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/recovered.edits] 2023-07-24 04:11:00,073 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/recovered.edits] 2023-07-24 04:11:00,073 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/recovered.edits] 2023-07-24 04:11:00,074 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): 
Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/recovered.edits] 2023-07-24 04:11:00,074 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/recovered.edits] 2023-07-24 04:11:00,074 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/recovered.edits] 2023-07-24 04:11:00,075 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/recovered.edits] 2023-07-24 04:11:00,091 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20/recovered.edits/4.seqid 2023-07-24 04:11:00,091 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810/recovered.edits/4.seqid 2023-07-24 04:11:00,091 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695/recovered.edits/4.seqid 2023-07-24 04:11:00,091 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/recovered.edits/4.seqid to 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b/recovered.edits/4.seqid 2023-07-24 04:11:00,091 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305/recovered.edits/4.seqid 2023-07-24 04:11:00,092 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/16ae5bf2f9e3704e74a83b04208d4f20 2023-07-24 04:11:00,092 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d 2023-07-24 04:11:00,093 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34/recovered.edits/4.seqid 2023-07-24 04:11:00,093 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/e871ccee14e30454c27e7760f7695695 2023-07-24 04:11:00,093 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:11:00,094 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/cd26f163dcad406136eb9bddca0f6810 2023-07-24 04:11:00,095 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/60834f7a98afcda3b9e986ee2c6f382b 2023-07-24 04:11:00,095 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/3c51243ece3758b803926a5484389e34 2023-07-24 04:11:00,095 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36/recovered.edits/4.seqid 2023-07-24 04:11:00,095 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/5e996bc4a1d77d54cbb649199a886305 2023-07-24 04:11:00,096 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/c16118052a59d7211967c6fd0222da36 2023-07-24 04:11:00,096 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4/recovered.edits/4.seqid 2023-07-24 04:11:00,097 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/49a050ad90a5d5c994e650d5d4c306c4 2023-07-24 04:11:00,097 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/recovered.edits] 2023-07-24 04:11:00,098 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/recovered.edits] 2023-07-24 04:11:00,105 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d/recovered.edits/4.seqid 2023-07-24 04:11:00,105 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e/recovered.edits/4.seqid 2023-07-24 04:11:00,106 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/33866eca882741fef69f1046d9617b5d 2023-07-24 04:11:00,106 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateMultiRegion/aeb78a3db3e6fd257ad80f2e5e0add6e 2023-07-24 04:11:00,106 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-24 04:11:00,110 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=63, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:00,114 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-24 04:11:00,117 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 2023-07-24 04:11:00,119 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=63, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:00,119 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 2023-07-24 04:11:00,119 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690171858271.3c51243ece3758b803926a5484389e34.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690171858271.e871ccee14e30454c27e7760f7695695.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690171858271.33866eca882741fef69f1046d9617b5d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171860119"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,123 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-24 04:11:00,123 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5e996bc4a1d77d54cbb649199a886305, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690171858271.5e996bc4a1d77d54cbb649199a886305.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, {ENCODED => 3c51243ece3758b803926a5484389e34, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690171858271.3c51243ece3758b803926a5484389e34.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => c16118052a59d7211967c6fd0222da36, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690171858271.c16118052a59d7211967c6fd0222da36.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => 16ae5bf2f9e3704e74a83b04208d4f20, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690171858271.16ae5bf2f9e3704e74a83b04208d4f20.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => e871ccee14e30454c27e7760f7695695, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690171858271.e871ccee14e30454c27e7760f7695695.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => 49a050ad90a5d5c994e650d5d4c306c4, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690171858271.49a050ad90a5d5c994e650d5d4c306c4.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => cd26f163dcad406136eb9bddca0f6810, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690171858271.cd26f163dcad406136eb9bddca0f6810.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => 60834f7a98afcda3b9e986ee2c6f382b, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690171858271.60834f7a98afcda3b9e986ee2c6f382b.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => 33866eca882741fef69f1046d9617b5d, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690171858271.33866eca882741fef69f1046d9617b5d.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => aeb78a3db3e6fd257ad80f2e5e0add6e, NAME => 'Group_testCreateMultiRegion,,1690171858271.aeb78a3db3e6fd257ad80f2e5e0add6e.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-24 04:11:00,124 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateMultiRegion' as deleted. 
2023-07-24 04:11:00,124 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171860124"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:00,126 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-24 04:11:00,128 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=63, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:00,130 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 90 msec 2023-07-24 04:11:00,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-24 04:11:00,150 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 63 completed 2023-07-24 04:11:00,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:00,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:00,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 04:11:00,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:00,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:00,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:00,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:00,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:00,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:00,173 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:00,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:00,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:00,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:00,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:00,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:00,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:00,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:00,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173060187, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:00,188 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:00,191 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:00,193 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:00,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:00,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:00,214 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=500 (was 497) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:46988 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18426628-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1383649379_17 at /127.0.0.1:37224 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-683287797_17 at /127.0.0.1:40382 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 750) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=538 (was 538), ProcessCount=176 (was 176), AvailableMemoryMB=6218 (was 6250) 2023-07-24 04:11:00,229 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=500, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=538, ProcessCount=176, AvailableMemoryMB=6217 2023-07-24 04:11:00,230 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-24 04:11:00,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:00,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:00,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:00,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:00,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:00,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:00,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:00,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,244 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 04:11:00,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:00,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:00,249 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:00,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:00,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:00,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:00,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:00,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:00,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:00,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:00,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 276 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173060265, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:00,266 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:00,268 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:00,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:00,270 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:00,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:00,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:00,272 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-24 04:11:00,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:00,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:00,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-24 04:11:00,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:00,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:00,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:00,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:00,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:00,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
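
The records that follow use the freshly added "appInfo" group: one region server is moved into it, and because that server still hosts a region that does not belong to the target group (hbase:meta, region 1588230740), the master schedules a REOPEN/MOVE TransitRegionStateProcedure to evacuate it first. A sketch of the client side of that move, assuming the same RSGroupAdminClient helper; host and port are the ones visible in the log:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AppInfoGroupSketch {
      // Create a dedicated group and move one region server into it. Regions still
      // hosted on that server are moved off by the master before the server joins.
      static void isolateServer(Connection conn) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.addRSGroup("appInfo");
        groups.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37679)),
            "appInfo");
      }
    }
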
2023-07-24 04:11:00,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37679] to rsgroup appInfo 2023-07-24 04:11:00,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:00,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:00,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:00,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:00,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup appInfo 2023-07-24 04:11:00,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:00,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:00,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:00,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:00,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:00,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=64, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 04:11:00,303 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 04:11:00,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 04:11:00,304 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37679,1690171852273, state=CLOSING 2023-07-24 04:11:00,306 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:00,306 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:00,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=64, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:00,324 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new 
MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 04:11:00,326 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 04:11:00,326 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 04:11:00,452 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 04:11:00,452 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 04:11:00,453 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:00,453 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 04:11:00,453 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 04:11:00,453 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 04:11:00,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-24 04:11:00,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 04:11:00,461 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 04:11:00,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 04:11:00,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 04:11:00,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 04:11:00,462 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=31.09 KB heapSize=49.85 KB 2023-07-24 04:11:00,556 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.03 KB at sequenceid=74 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:00,592 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:00,628 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=1.19 KB at sequenceid=74 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/rep_barrier/354c8876ee08418994b55326872ce722 2023-07-24 04:11:00,635 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 354c8876ee08418994b55326872ce722 2023-07-24 04:11:00,653 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.87 KB at sequenceid=74 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:00,660 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:00,662 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/ea6a294b028040dcb802cfd24f5c7162 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:00,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:00,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162, entries=42, sequenceid=74, filesize=9.5 K 2023-07-24 04:11:00,675 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/rep_barrier/354c8876ee08418994b55326872ce722 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/354c8876ee08418994b55326872ce722 2023-07-24 04:11:00,686 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 354c8876ee08418994b55326872ce722 2023-07-24 04:11:00,687 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/354c8876ee08418994b55326872ce722, entries=11, sequenceid=74, filesize=6.1 K 2023-07-24 04:11:00,688 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/e673da21eba54a61b6fc1007d80762bf as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:00,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:00,698 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf, entries=17, sequenceid=74, filesize=6.2 K 2023-07-24 04:11:00,699 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~31.09 KB/31839, heapSize ~49.80 KB/51000, currentSize=0 B/0 for 1588230740 in 237ms, sequenceid=74, compaction requested=false 2023-07-24 04:11:00,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/recovered.edits/77.seqid, newMaxSeqId=77, maxSeqId=1 2023-07-24 04:11:00,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:00,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:00,722 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 04:11:00,722 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,39717,1690171855814 record at close sequenceid=74 2023-07-24 04:11:00,724 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-24 04:11:00,725 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-24 04:11:00,727 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=64 2023-07-24 04:11:00,727 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=64, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37679,1690171852273 in 419 msec 2023-07-24 04:11:00,728 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=64, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39717,1690171855814; forceNewPlan=false, retain=false 2023-07-24 04:11:00,878 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
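
After the close on the old server, the balancer assigns hbase:meta to jenkins-hbase4.apache.org,39717 and the records below show the OPENING state being written to ZooKeeper and a new meta WAL being created there. A small client-side check of where meta ended up, hedged as a sketch that simply asks for a fresh lookup (reload=true) instead of trusting a cached location:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      // Ask the cluster which server currently hosts hbase:meta after the move.
      static void printMetaLocation(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }
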
2023-07-24 04:11:00,879 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39717,1690171855814, state=OPENING 2023-07-24 04:11:00,880 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:00,880 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:00,880 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=64, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39717,1690171855814}] 2023-07-24 04:11:01,037 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 04:11:01,037 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:01,039 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39717%2C1690171855814.meta, suffix=.meta, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:01,059 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:01,059 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:01,060 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:01,073 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814/jenkins-hbase4.apache.org%2C39717%2C1690171855814.meta.1690171861041.meta 2023-07-24 04:11:01,074 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:01,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:01,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:11:01,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 04:11:01,075 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 04:11:01,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 04:11:01,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:01,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 04:11:01,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 04:11:01,079 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 04:11:01,080 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:11:01,080 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:11:01,080 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 04:11:01,091 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:01,091 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:01,092 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:01,092 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 04:11:01,093 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:11:01,093 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:11:01,094 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 04:11:01,106 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 354c8876ee08418994b55326872ce722 2023-07-24 04:11:01,106 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/354c8876ee08418994b55326872ce722 2023-07-24 04:11:01,107 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:01,107 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 04:11:01,108 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:11:01,108 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:11:01,109 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, 
single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 04:11:01,121 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:01,122 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:01,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:01,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:11:01,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:11:01,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 04:11:01,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 04:11:01,129 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=78; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10580171360, jitterRate=-0.01464475691318512}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 04:11:01,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 04:11:01,130 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=66, masterSystemTime=1690171861032 2023-07-24 04:11:01,132 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 04:11:01,132 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 04:11:01,132 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39717,1690171855814, state=OPEN 2023-07-24 04:11:01,135 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:01,135 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:01,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing 
ppid=64 2023-07-24 04:11:01,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=64, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39717,1690171855814 in 255 msec 2023-07-24 04:11:01,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 836 msec 2023-07-24 04:11:01,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure.ProcedureSyncWait(216): waitFor pid=64 2023-07-24 04:11:01,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37679,1690171852273] are moved back to default 2023-07-24 04:11:01,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-24 04:11:01,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:01,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:01,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:01,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-24 04:11:01,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:01,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-24 04:11:01,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=67, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:01,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=67 2023-07-24 04:11:01,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:11:01,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-24 04:11:01,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=67 2023-07-24 04:11:01,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:01,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=68, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:01,430 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:11:01,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 68 2023-07-24 04:11:01,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-24 04:11:01,432 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:01,433 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:01,433 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:01,434 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:01,437 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:11:01,438 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37679] ipc.CallRunner(144): callId: 173 service: ClientService methodName: Get size: 153 connection: 172.31.14.131:38430 deadline: 1690171921438, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=39717 startCode=1690171855814. As of locationSeqNum=74. 2023-07-24 04:11:01,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-24 04:11:01,540 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:01,541 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:01,546 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,547 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 empty. 
2023-07-24 04:11:01,548 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,548 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-24 04:11:01,573 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-24 04:11:01,575 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4688786c9b7154d918533dbd1be188d4, NAME => 'Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:11:01,588 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:01,588 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 4688786c9b7154d918533dbd1be188d4, disabling compactions & flushes 2023-07-24 04:11:01,588 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:01,588 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:01,588 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. after waiting 0 ms 2023-07-24 04:11:01,588 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:01,588 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 
2023-07-24 04:11:01,588 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 4688786c9b7154d918533dbd1be188d4: 2023-07-24 04:11:01,591 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:11:01,592 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690171861592"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171861592"}]},"ts":"1690171861592"} 2023-07-24 04:11:01,594 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:11:01,595 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:11:01,595 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171861595"}]},"ts":"1690171861595"} 2023-07-24 04:11:01,597 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-24 04:11:01,601 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, ASSIGN}] 2023-07-24 04:11:01,603 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, ASSIGN 2023-07-24 04:11:01,604 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=69, ppid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1690171852273; forceNewPlan=false, retain=false 2023-07-24 04:11:01,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-24 04:11:01,756 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=4688786c9b7154d918533dbd1be188d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:01,756 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690171861756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171861756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171861756"}]},"ts":"1690171861756"} 2023-07-24 04:11:01,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE; OpenRegionProcedure 
4688786c9b7154d918533dbd1be188d4, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:01,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:01,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4688786c9b7154d918533dbd1be188d4, NAME => 'Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:01,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:01,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,917 INFO [StoreOpener-4688786c9b7154d918533dbd1be188d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,919 DEBUG [StoreOpener-4688786c9b7154d918533dbd1be188d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/f 2023-07-24 04:11:01,919 DEBUG [StoreOpener-4688786c9b7154d918533dbd1be188d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/f 2023-07-24 04:11:01,919 INFO [StoreOpener-4688786c9b7154d918533dbd1be188d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4688786c9b7154d918533dbd1be188d4 columnFamilyName f 2023-07-24 04:11:01,920 INFO [StoreOpener-4688786c9b7154d918533dbd1be188d4-1] regionserver.HStore(310): Store=4688786c9b7154d918533dbd1be188d4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:01,921 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:01,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:11:01,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4688786c9b7154d918533dbd1be188d4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9767550720, jitterRate=-0.09032595157623291}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:01,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4688786c9b7154d918533dbd1be188d4: 2023-07-24 04:11:01,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4., pid=70, masterSystemTime=1690171861910 2023-07-24 04:11:01,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:01,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 
2023-07-24 04:11:01,935 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=4688786c9b7154d918533dbd1be188d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:01,935 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690171861934"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171861934"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171861934"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171861934"}]},"ts":"1690171861934"} 2023-07-24 04:11:01,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=69 2023-07-24 04:11:01,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; OpenRegionProcedure 4688786c9b7154d918533dbd1be188d4, server=jenkins-hbase4.apache.org,37679,1690171852273 in 179 msec 2023-07-24 04:11:01,942 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=68 2023-07-24 04:11:01,943 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=68, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, ASSIGN in 339 msec 2023-07-24 04:11:01,943 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:11:01,944 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171861943"}]},"ts":"1690171861943"} 2023-07-24 04:11:01,945 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-24 04:11:01,948 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=68, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:11:01,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 522 msec 2023-07-24 04:11:02,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=68 2023-07-24 04:11:02,036 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 68 completed 2023-07-24 04:11:02,036 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:02,042 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] 
procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 04:11:02,051 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171862051"}]},"ts":"1690171862051"} 2023-07-24 04:11:02,052 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-24 04:11:02,055 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-24 04:11:02,056 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, UNASSIGN}] 2023-07-24 04:11:02,058 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, UNASSIGN 2023-07-24 04:11:02,059 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=4688786c9b7154d918533dbd1be188d4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:02,059 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690171862059"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171862059"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171862059"}]},"ts":"1690171862059"} 2023-07-24 04:11:02,061 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure 4688786c9b7154d918533dbd1be188d4, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:02,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 04:11:02,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:02,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4688786c9b7154d918533dbd1be188d4, disabling compactions & flushes 2023-07-24 04:11:02,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:02,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:02,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 
after waiting 0 ms 2023-07-24 04:11:02,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:02,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:11:02,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4. 2023-07-24 04:11:02,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4688786c9b7154d918533dbd1be188d4: 2023-07-24 04:11:02,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:02,227 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=4688786c9b7154d918533dbd1be188d4, regionState=CLOSED 2023-07-24 04:11:02,228 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690171862227"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171862227"}]},"ts":"1690171862227"} 2023-07-24 04:11:02,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-24 04:11:02,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure 4688786c9b7154d918533dbd1be188d4, server=jenkins-hbase4.apache.org,37679,1690171852273 in 169 msec 2023-07-24 04:11:02,237 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=71 2023-07-24 04:11:02,237 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=71, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=4688786c9b7154d918533dbd1be188d4, UNASSIGN in 177 msec 2023-07-24 04:11:02,238 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171862238"}]},"ts":"1690171862238"} 2023-07-24 04:11:02,240 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-24 04:11:02,242 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-24 04:11:02,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 201 msec 2023-07-24 04:11:02,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 04:11:02,350 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 71 completed 2023-07-24 04:11:02,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] 
master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,361 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-24 04:11:02,362 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:02,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:02,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:02,367 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:02,370 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/recovered.edits] 2023-07-24 04:11:02,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 04:11:02,378 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4/recovered.edits/4.seqid 2023-07-24 04:11:02,378 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_foo/Group_testCreateAndAssign/4688786c9b7154d918533dbd1be188d4 2023-07-24 04:11:02,378 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-24 04:11:02,382 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, 
state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,384 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-24 04:11:02,387 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-24 04:11:02,388 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,388 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-24 04:11:02,388 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171862388"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:02,392 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 04:11:02,392 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4688786c9b7154d918533dbd1be188d4, NAME => 'Group_foo:Group_testCreateAndAssign,,1690171861426.4688786c9b7154d918533dbd1be188d4.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 04:11:02,392 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 2023-07-24 04:11:02,392 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171862392"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:02,394 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-24 04:11:02,396 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:02,398 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 39 msec 2023-07-24 04:11:02,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 04:11:02,472 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 74 completed 2023-07-24 04:11:02,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 04:11:02,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:02,490 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:02,494 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 
2023-07-24 04:11:02,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 04:11:02,497 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:02,498 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 04:11:02,498 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:11:02,499 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:02,501 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=75, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:02,502 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 21 msec 2023-07-24 04:11:02,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 04:11:02,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:02,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:02,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:02,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 04:11:02,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:02,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:02,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:02,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:02,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:02,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 04:11:02,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:02,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:02,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 04:11:02,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:02,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-24 04:11:02,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:02,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:02,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-24 04:11:02,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37679,1690171852273] are moved back to appInfo 2023-07-24 04:11:02,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-24 04:11:02,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:02,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-24 04:11:02,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:02,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:02,630 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:02,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:02,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:02,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:02,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 
04:11:02,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:02,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:02,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:02,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:02,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 365 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173062643, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:02,644 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:02,645 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:02,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:02,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:02,647 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:02,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:02,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:02,666 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=516 (was 500) Potentially hanging thread: hconnection-0xe88b18-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-666740306_17 at /127.0.0.1:42742 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:52592 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:42754 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-666740306_17 at /127.0.0.1:46988 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-666740306_17 at /127.0.0.1:42760 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1383649379_17 at /127.0.0.1:46490 
[Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,39717,1690171855814.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1892649419_17 at /127.0.0.1:37224 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1383649379_17 at /127.0.0.1:40382 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1555149928_17 at /127.0.0.1:46498 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 781) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=538 (was 538), ProcessCount=176 (was 176), AvailableMemoryMB=6191 (was 6217) 2023-07-24 04:11:02,666 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-24 04:11:02,685 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=516, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=538, ProcessCount=176, AvailableMemoryMB=6189 2023-07-24 04:11:02,685 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-24 04:11:02,685 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-24 04:11:02,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:02,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:02,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:02,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
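The teardown/setup sequence logged above (move servers and tables back to 'default', remove the per-test group, re-add 'master', then the tolerated ConstraintException when the master's own address is moved) maps onto the rsgroup admin client calls named in the stack traces (RSGroupAdminClient.moveServers and friends). Below is a rough, hypothetical sketch of those calls, assuming the RSGroupAdminClient signatures from the branch-2.4 hbase-rsgroup module; the host:port values are copied from the log, and constructors/signatures should be verified against the source rather than taken from this sketch.

    // Illustrative sketch only: roughly the cleanup that TestRSGroupsBase performs through
    // the rsgroup admin endpoint, based on the calls visible in the stack traces above.
    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Move the server the test had parked in "appInfo" back to the default group.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:37679")),
              "default");

          // Drop the per-test group, then re-create the bookkeeping "master" group.
          rsGroupAdmin.removeRSGroup("appInfo");
          rsGroupAdmin.addRSGroup("master");

          // Moving the master's own address into a group fails with a ConstraintException
          // ("is either offline or it does not exist"); the test logs it as "Got this on
          // setup, FYI" and carries on, as seen in the log above.
          try {
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:36883")),
                "master");
          } catch (IOException expected) {
            // tolerated, same as in the test teardown
          }
        }
      }
    }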
2023-07-24 04:11:02,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:02,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:02,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:02,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:02,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:02,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:02,701 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:02,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:02,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:02,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:02,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:02,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:02,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:02,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:02,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:02,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 393 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173062919, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:02,920 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:02,922 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:02,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:02,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:02,923 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:02,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:02,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:02,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:02,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:02,931 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:11:02,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): 
Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 76 2023-07-24 04:11:02,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-24 04:11:02,933 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:02,934 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:02,934 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:02,936 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:11:02,949 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:02,950 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f empty. 2023-07-24 04:11:02,950 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:02,950 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-24 04:11:02,975 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 04:11:02,976 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 13af92040b93e501440f5fc9067ed36f, NAME => 'Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:11:02,988 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:02,988 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing 13af92040b93e501440f5fc9067ed36f, disabling compactions & flushes 2023-07-24 04:11:02,988 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 
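The CreateTableProcedure entries above (pid=76) correspond to a plain Admin.createTable call for 'Group_testCreateAndDrop' with a single 'cf' family and REGION_REPLICATION => '1'. The following minimal, hypothetical client-side sketch uses the standard HBase 2.x descriptor builders; it is not the test's own code, and the disable/delete at the end is only inferred from the testCreateAndDrop name rather than shown in this portion of the log.

    // Minimal sketch of the client call that produces the "create 'Group_testCreateAndDrop'"
    // request seen in the master log above; all other table/family attributes are defaults.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateAndDropSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testCreateAndDrop");
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
              .setRegionReplication(1)                      // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder
                  .newBuilder(Bytes.toBytes("cf"))          // NAME => 'cf'
                  .setMaxVersions(1)                        // VERSIONS => '1'
                  .build())
              .build();
          // This call is what shows up as CreateTableProcedure pid=76 in the master log.
          admin.createTable(desc);

          // Presumed follow-up for a "create and drop" test: disable, then delete the table.
          admin.disableTable(tn);
          admin.deleteTable(tn);
        }
      }
    }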
2023-07-24 04:11:02,988 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:02,988 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. after waiting 0 ms 2023-07-24 04:11:02,988 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:02,988 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:02,988 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 13af92040b93e501440f5fc9067ed36f: 2023-07-24 04:11:02,991 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:11:02,992 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171862992"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171862992"}]},"ts":"1690171862992"} 2023-07-24 04:11:02,994 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:11:02,995 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:11:02,995 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171862995"}]},"ts":"1690171862995"} 2023-07-24 04:11:02,996 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-24 04:11:03,000 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:03,000 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:03,000 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:03,000 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:03,000 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 04:11:03,000 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:03,000 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, ASSIGN}] 2023-07-24 04:11:03,002 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=77, ppid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, ASSIGN 2023-07-24 04:11:03,003 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=77, ppid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1690171852273; forceNewPlan=false, retain=false 2023-07-24 04:11:03,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-24 04:11:03,153 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 04:11:03,155 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=77 updating hbase:meta row=13af92040b93e501440f5fc9067ed36f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:03,155 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171863155"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171863155"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171863155"}]},"ts":"1690171863155"} 2023-07-24 04:11:03,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=77, state=RUNNABLE; OpenRegionProcedure 13af92040b93e501440f5fc9067ed36f, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:03,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-24 04:11:03,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 
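Once the ASSIGN TransitRegionStateProcedure (pid=77) and its child OpenRegionProcedure (pid=78) finish, the table's single region 13af92040b93e501440f5fc9067ed36f should be online on jenkins-hbase4.apache.org,37679. A small, hypothetical snippet for confirming that from a client, using the standard RegionLocator API (not part of the test itself):

    // Hedged illustration: list where the regions of the newly created table ended up.
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class WhereIsMyRegion {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             RegionLocator locator = conn.getRegionLocator(
                 TableName.valueOf("Group_testCreateAndDrop"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // For the log above this should report encoded name 13af92040b93e501440f5fc9067ed36f
            // hosted on jenkins-hbase4.apache.org,37679,... once the open completes.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }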
2023-07-24 04:11:03,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13af92040b93e501440f5fc9067ed36f, NAME => 'Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:03,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:03,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,314 INFO [StoreOpener-13af92040b93e501440f5fc9067ed36f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,316 DEBUG [StoreOpener-13af92040b93e501440f5fc9067ed36f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/cf 2023-07-24 04:11:03,316 DEBUG [StoreOpener-13af92040b93e501440f5fc9067ed36f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/cf 2023-07-24 04:11:03,316 INFO [StoreOpener-13af92040b93e501440f5fc9067ed36f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13af92040b93e501440f5fc9067ed36f columnFamilyName cf 2023-07-24 04:11:03,317 INFO [StoreOpener-13af92040b93e501440f5fc9067ed36f-1] regionserver.HStore(310): Store=13af92040b93e501440f5fc9067ed36f/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:03,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:11:03,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13af92040b93e501440f5fc9067ed36f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11110074240, jitterRate=0.034706294536590576}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:03,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13af92040b93e501440f5fc9067ed36f: 2023-07-24 04:11:03,324 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f., pid=78, masterSystemTime=1690171863308 2023-07-24 04:11:03,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:03,326 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 
2023-07-24 04:11:03,326 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=77 updating hbase:meta row=13af92040b93e501440f5fc9067ed36f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:03,327 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171863326"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171863326"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171863326"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171863326"}]},"ts":"1690171863326"} 2023-07-24 04:11:03,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=77 2023-07-24 04:11:03,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=77, state=SUCCESS; OpenRegionProcedure 13af92040b93e501440f5fc9067ed36f, server=jenkins-hbase4.apache.org,37679,1690171852273 in 171 msec 2023-07-24 04:11:03,332 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-24 04:11:03,332 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, ASSIGN in 330 msec 2023-07-24 04:11:03,332 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:11:03,333 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171863333"}]},"ts":"1690171863333"} 2023-07-24 04:11:03,334 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-24 04:11:03,337 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=76, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:11:03,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 410 msec 2023-07-24 04:11:03,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=76 2023-07-24 04:11:03,543 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 76 completed 2023-07-24 04:11:03,543 DEBUG [Listener at localhost/41307] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. 
Timeout = 60000ms 2023-07-24 04:11:03,543 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:03,544 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37679] ipc.CallRunner(144): callId: 408 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:38444 deadline: 1690171923543, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=39717 startCode=1690171855814. As of locationSeqNum=74. 2023-07-24 04:11:03,648 DEBUG [hconnection-0x18426628-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:03,650 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:03,655 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 2023-07-24 04:11:03,656 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:03,656 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-24 04:11:03,656 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:03,660 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-24 04:11:03,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndDrop 2023-07-24 04:11:03,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:03,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-24 04:11:03,665 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171863665"}]},"ts":"1690171863665"} 2023-07-24 04:11:03,667 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-24 04:11:03,669 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-24 04:11:03,670 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, UNASSIGN}] 2023-07-24 04:11:03,672 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, UNASSIGN 2023-07-24 04:11:03,673 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=80 updating hbase:meta row=13af92040b93e501440f5fc9067ed36f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:03,673 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171863673"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171863673"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171863673"}]},"ts":"1690171863673"} 2023-07-24 04:11:03,674 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=80, state=RUNNABLE; CloseRegionProcedure 13af92040b93e501440f5fc9067ed36f, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:03,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-24 04:11:03,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13af92040b93e501440f5fc9067ed36f, disabling compactions & flushes 2023-07-24 04:11:03,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:03,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:03,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. after waiting 0 ms 2023-07-24 04:11:03,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 2023-07-24 04:11:03,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:11:03,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f. 
2023-07-24 04:11:03,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13af92040b93e501440f5fc9067ed36f: 2023-07-24 04:11:03,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,847 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=80 updating hbase:meta row=13af92040b93e501440f5fc9067ed36f, regionState=CLOSED 2023-07-24 04:11:03,847 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171863846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171863846"}]},"ts":"1690171863846"} 2023-07-24 04:11:03,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=80 2023-07-24 04:11:03,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=80, state=SUCCESS; CloseRegionProcedure 13af92040b93e501440f5fc9067ed36f, server=jenkins-hbase4.apache.org,37679,1690171852273 in 174 msec 2023-07-24 04:11:03,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-24 04:11:03,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=13af92040b93e501440f5fc9067ed36f, UNASSIGN in 181 msec 2023-07-24 04:11:03,855 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171863855"}]},"ts":"1690171863855"} 2023-07-24 04:11:03,856 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-24 04:11:03,858 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateAndDrop to state=DISABLED 2023-07-24 04:11:03,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 201 msec 2023-07-24 04:11:03,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-24 04:11:03,967 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 79 completed 2023-07-24 04:11:03,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndDrop 2023-07-24 04:11:03,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:03,971 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=82, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:03,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-24 04:11:03,972 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from 
filesystem for pid=82, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:03,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:03,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:03,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:03,976 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,977 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/cf, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/recovered.edits] 2023-07-24 04:11:03,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-24 04:11:03,984 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f/recovered.edits/4.seqid 2023-07-24 04:11:03,985 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCreateAndDrop/13af92040b93e501440f5fc9067ed36f 2023-07-24 04:11:03,985 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-24 04:11:03,987 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=82, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:03,994 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-24 04:11:03,996 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-24 04:11:03,997 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=82, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:03,997 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 
2023-07-24 04:11:03,998 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171863997"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:03,999 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 04:11:03,999 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 13af92040b93e501440f5fc9067ed36f, NAME => 'Group_testCreateAndDrop,,1690171862926.13af92040b93e501440f5fc9067ed36f.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 04:11:03,999 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 2023-07-24 04:11:03,999 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171863999"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:04,000 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-24 04:11:04,002 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=82, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:04,004 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 35 msec 2023-07-24 04:11:04,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-24 04:11:04,081 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 82 completed 2023-07-24 04:11:04,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:04,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:04,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:04,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 04:11:04,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:04,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:04,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:04,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:04,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:04,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:04,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:04,097 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:04,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:04,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:04,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:04,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:04,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:04,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:04,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:04,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:04,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:04,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 453 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173064109, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:04,110 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:04,114 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:04,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:04,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:04,115 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:04,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:04,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:04,133 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=520 (was 516) Potentially hanging thread: hconnection-0xe88b18-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18426628-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1383649379_17 at /127.0.0.1:46490 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-666740306_17 at /127.0.0.1:42796 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=805 (was 801) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 538), ProcessCount=176 (was 176), AvailableMemoryMB=6110 (was 6189) 2023-07-24 04:11:04,133 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-24 04:11:04,149 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=520, OpenFileDescriptor=805, MaxFileDescriptor=60000, SystemLoadAverage=511, ProcessCount=176, AvailableMemoryMB=6109 2023-07-24 04:11:04,149 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-24 04:11:04,149 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testCloneSnapshot 2023-07-24 04:11:04,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:04,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:04,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:04,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:04,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:04,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:04,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:04,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:04,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:04,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:04,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:04,164 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:04,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:04,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:04,167 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:04,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:04,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:04,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:04,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:04,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:04,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:04,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 481 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173064174, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:04,175 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:04,176 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:04,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:04,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:04,177 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:04,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:04,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:04,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:04,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=83, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:04,182 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:11:04,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 83 2023-07-24 04:11:04,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-24 04:11:04,184 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:04,184 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:04,185 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:04,186 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:11:04,188 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,189 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 empty. 2023-07-24 04:11:04,189 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,189 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-24 04:11:04,203 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-24 04:11:04,204 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 260281154dc237c5f79b733f26982856, NAME => 'Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:11:04,215 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:04,216 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing 260281154dc237c5f79b733f26982856, disabling compactions & flushes 2023-07-24 04:11:04,216 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,216 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,216 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. after waiting 0 ms 2023-07-24 04:11:04,216 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,216 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 
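The entries above show the master executing CreateTableProcedure pid=83 for 'Group_testCloneSnapshot' with a single 'test' column family and default attributes. The test drives this through its own utilities; purely as an illustration, a client could issue the equivalent request like this (assumes an already-open Connection; table and family names are taken from the log):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Rough sketch: create 'Group_testCloneSnapshot' with one 'test' column family,
// mirroring the descriptor printed by HMaster above. Not the test's actual code.
static void createSnapshotSourceTable(Connection conn) throws Exception {
  TableName table = TableName.valueOf("Group_testCloneSnapshot");
  try (Admin admin = conn.getAdmin()) {
    admin.createTable(TableDescriptorBuilder.newBuilder(table)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("test"))
        .build());   // synchronous: returns once the CreateTableProcedure finishes
  }
}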
2023-07-24 04:11:04,216 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 260281154dc237c5f79b733f26982856: 2023-07-24 04:11:04,218 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:11:04,219 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171864219"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171864219"}]},"ts":"1690171864219"} 2023-07-24 04:11:04,221 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:11:04,223 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:11:04,223 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171864223"}]},"ts":"1690171864223"} 2023-07-24 04:11:04,225 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-24 04:11:04,229 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:04,229 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:04,229 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:04,229 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:04,229 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 04:11:04,229 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:04,229 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, ASSIGN}] 2023-07-24 04:11:04,231 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, ASSIGN 2023-07-24 04:11:04,232 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=84, ppid=83, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41157,1690171852333; forceNewPlan=false, retain=false 2023-07-24 04:11:04,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-24 04:11:04,383 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 04:11:04,384 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=260281154dc237c5f79b733f26982856, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,384 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171864384"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171864384"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171864384"}]},"ts":"1690171864384"} 2023-07-24 04:11:04,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; OpenRegionProcedure 260281154dc237c5f79b733f26982856, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:11:04,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-24 04:11:04,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 260281154dc237c5f79b733f26982856, NAME => 'Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:04,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot 260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:04,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,544 INFO [StoreOpener-260281154dc237c5f79b733f26982856-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,546 DEBUG [StoreOpener-260281154dc237c5f79b733f26982856-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/test 2023-07-24 04:11:04,546 DEBUG [StoreOpener-260281154dc237c5f79b733f26982856-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/test 2023-07-24 04:11:04,547 INFO [StoreOpener-260281154dc237c5f79b733f26982856-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 260281154dc237c5f79b733f26982856 columnFamilyName test 2023-07-24 04:11:04,547 INFO [StoreOpener-260281154dc237c5f79b733f26982856-1] regionserver.HStore(310): Store=260281154dc237c5f79b733f26982856/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:04,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 260281154dc237c5f79b733f26982856 2023-07-24 04:11:04,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:11:04,556 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 260281154dc237c5f79b733f26982856; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11903081120, jitterRate=0.10856081545352936}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:04,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 260281154dc237c5f79b733f26982856: 2023-07-24 04:11:04,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856., pid=85, masterSystemTime=1690171864538 2023-07-24 04:11:04,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 
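At this point the table's only region has been opened on jenkins-hbase4.apache.org,41157. A small, hypothetical client-side check of where the region landed (assumes the same Connection as in the earlier sketch):

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

// Illustrative only: list region locations for the new table; for this log it should
// report encoded region 260281154dc237c5f79b733f26982856 on the ...,41157,... server.
static void printRegionLocations(Connection conn) throws IOException {
  try (RegionLocator locator =
      conn.getRegionLocator(TableName.valueOf("Group_testCloneSnapshot"))) {
    for (HRegionLocation loc : locator.getAllRegionLocations()) {
      System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
    }
  }
}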
2023-07-24 04:11:04,559 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=260281154dc237c5f79b733f26982856, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,559 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171864559"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171864559"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171864559"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171864559"}]},"ts":"1690171864559"} 2023-07-24 04:11:04,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-24 04:11:04,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; OpenRegionProcedure 260281154dc237c5f79b733f26982856, server=jenkins-hbase4.apache.org,41157,1690171852333 in 175 msec 2023-07-24 04:11:04,564 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=83 2023-07-24 04:11:04,564 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=83, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, ASSIGN in 333 msec 2023-07-24 04:11:04,565 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:11:04,565 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171864565"}]},"ts":"1690171864565"} 2023-07-24 04:11:04,566 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-24 04:11:04,569 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=83, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:11:04,571 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 389 msec 2023-07-24 04:11:04,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=83 2023-07-24 04:11:04,787 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 83 completed 2023-07-24 04:11:04,787 DEBUG [Listener at localhost/41307] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-24 04:11:04,788 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:04,792 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
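The "Waiting until all regions of table Group_testCloneSnapshot get assigned" lines come from HBaseTestingUtility. In a mini-cluster test that wait is a one-liner; a sketch, with TEST_UTIL standing for the utility instance started at the top of the log:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

// Block until every region of the table is assigned and reflected in hbase:meta,
// matching the HBaseTestingUtility(3430/3484/3504) entries above.
static void waitForTableAssignment(HBaseTestingUtility TEST_UTIL) throws Exception {
  TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testCloneSnapshot"));
}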
2023-07-24 04:11:04,792 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:04,792 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-24 04:11:04,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1583): Client=jenkins//172.31.14.131 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-24 04:11:04,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1690171864804 (current time:1690171864804). 2023-07-24 04:11:04,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-24 04:11:04,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] zookeeper.ReadOnlyZKClient(139): Connect 0x4914bc69 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:04,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2378fdf5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:04,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:04,818 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60384, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:04,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4914bc69 to 127.0.0.1:59235 2023-07-24 04:11:04,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:04,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
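The snapshot request "{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }" is an ordinary admin snapshot call. A hedged sketch of the client side (again assuming an open Connection; the blocking call returns only after the master reports the snapshot done):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.SnapshotDescription;
import org.apache.hadoop.hbase.client.SnapshotType;

// Take a FLUSH-type snapshot of the table, matching the request logged by MasterRpcServices.
static void takeFlushSnapshot(Connection conn) throws Exception {
  try (Admin admin = conn.getAdmin()) {
    admin.snapshot(new SnapshotDescription("Group_testCloneSnapshot_snap",
        TableName.valueOf("Group_testCloneSnapshot"), SnapshotType.FLUSH));
  }
}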
2023-07-24 04:11:04,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-24 04:11:04,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=86, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 04:11:04,850 DEBUG [PEWorker-3] locking.LockProcedure(309): LOCKED pid=86, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 04:11:04,852 INFO [PEWorker-3] procedure2.TimeoutExecutorThread(81): ADDED pid=86, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1690172464852 2023-07-24 04:11:04,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-24 04:11:04,852 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-24 04:11:04,854 DEBUG [PEWorker-1] locking.LockProcedure(242): UNLOCKED pid=86, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 04:11:04,858 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-24 04:11:04,858 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 
2023-07-24 04:11:04,861 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 04:11:04,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 15 msec 2023-07-24 04:11:04,864 DEBUG [PEWorker-1] locking.LockProcedure(309): LOCKED pid=87, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 04:11:04,866 INFO [PEWorker-1] procedure2.TimeoutExecutorThread(81): ADDED pid=87, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1690172464866 2023-07-24 04:11:04,899 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-24 04:11:04,899 INFO [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-24 04:11:04,899 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 04:11:04,900 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-24 04:11:04,900 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 
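The coordinator entries above describe a two-phase barrier kept in ZooKeeper under /hbase/online-snapshot: an acquire znode per procedure, an abort znode watched for errors, and later a reached znode. This is internal to HBase; purely to illustrate the znode layout being logged, a bare ZooKeeper equivalent of the coordinator's first step might look like the following (assumes the parent /hbase/online-snapshot/... znodes already exist):

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustration only (not HBase's ZKProcedureCoordinator): announce the 'acquire' phase by
// creating the procedure's acquired znode, then watch for each member to add a child under it.
static void startAcquirePhase(ZooKeeper zk, List<String> memberNames) throws Exception {
  String acquired = "/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap";
  zk.create(acquired, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  for (String member : memberNames) {
    zk.exists(acquired + "/" + member, true);  // set a watch on a child that does not exist yet
  }
}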
2023-07-24 04:11:04,900 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,900 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,902 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,902 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,902 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,902 
DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,902 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 
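On the member side each region server then joins the barrier, as the "joining acquired barrier ... in zk" entries just below show. Continuing the same toy ZooKeeper sketch (not HBase's ZKProcedureMemberRpcs), a member would do roughly:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustration only: a member registers under the acquired znode and watches for the
// coordinator's 'reached' znode, which signals the global barrier.
static void joinAcquiredBarrier(ZooKeeper zk, String memberName) throws Exception {
  String base = "/hbase/online-snapshot";
  zk.create(base + "/acquired/Group_testCloneSnapshot_snap/" + memberName,
      new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  zk.exists(base + "/reached/Group_testCloneSnapshot_snap", true);  // not created yet
}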
2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,904 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 04:11:04,906 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 04:11:04,908 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 04:11:04,909 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 04:11:04,909 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 04:11:04,910 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 04:11:04,911 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 04:11:04,911 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 04:11:04,909 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] 
procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 04:11:04,911 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,39717,1690171855814' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 04:11:04,911 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37679,1690171852273' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,43785,1690171856375' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 04:11:04,912 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41157,1690171852333' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 04:11:04,914 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,918 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,918 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,918 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,918 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: 
/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,918 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-24 04:11:04,918 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 04:11:04,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-24 04:11:04,919 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,919 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 04:11:04,919 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,919 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 04:11:04,919 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,919 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 04:11:04,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-24 04:11:04,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-24 04:11:04,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 04:11:04,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,921 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,921 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,921 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |-reached 2023-07-24 04:11:04,921 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,41157,1690171852333' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-24 04:11:04,921 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 2023-07-24 04:11:04,921 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@24a4ae4f[Count = 0] remaining members to acquire global barrier 2023-07-24 04:11:04,921 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
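The coordinator tracks outstanding members with a plain java.util.concurrent.CountDownLatch ("Waiting on: java.util.concurrent.CountDownLatch@...[Count = 0] remaining members to acquire global barrier"). A self-contained sketch of that pattern, with worker threads standing in for the four region servers:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of the acquire barrier: the coordinator waits until all members count down,
// then moves on to the in-barrier ('reached') phase, represented here by a println.
public class BarrierSketch {
  public static void main(String[] args) throws InterruptedException {
    int members = 4;  // the cluster in this log has four region servers
    CountDownLatch acquired = new CountDownLatch(members);
    ExecutorService pool = Executors.newFixedThreadPool(members);
    for (int i = 0; i < members; i++) {
      pool.submit(acquired::countDown);  // "member ... joining acquired barrier"
    }
    acquired.await();                    // count reaches 0 once every member has acquired
    System.out.println("all members acquired; coordinator enters the in-barrier phase");
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
  }
}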
2023-07-24 04:11:04,923 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,923 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-24 04:11:04,923 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,923 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-24 04:11:04,923 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 04:11:04,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,39717,1690171855814' in zk 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,37679,1690171852273' in zk 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,43785,1690171856375' in zk 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-24 04:11:04,924 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-24 04:11:04,924 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,925 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. started... 2023-07-24 04:11:04,926 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for 260281154dc237c5f79b733f26982856: 2023-07-24 04:11:04,927 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 04:11:04,927 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 04:11:04,927 DEBUG [member: 'jenkins-hbase4.apache.org,39717,1690171855814' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 04:11:04,928 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.' region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-24 04:11:04,930 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 04:11:04,930 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
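For a FLUSH snapshot each region is flushed before its files are recorded; this table has never been written, so the flush journal above is empty and, as logged just below, the manifest adds references for zero hfiles. The snapshot subprocedure does the flush internally; for comparison only, an explicit client-side flush of the same table would be:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

// Force a memstore flush for every region of the table (a no-op when nothing has been written).
static void flushTable(Connection conn) throws Exception {
  try (Admin admin = conn.getAdmin()) {
    admin.flush(TableName.valueOf("Group_testCloneSnapshot"));
  }
}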
2023-07-24 04:11:04,930 DEBUG [member: 'jenkins-hbase4.apache.org,43785,1690171856375' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 04:11:04,934 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 04:11:04,935 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 04:11:04,935 DEBUG [member: 'jenkins-hbase4.apache.org,37679,1690171852273' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 04:11:04,938 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-24 04:11:04,942 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-24 04:11:04,959 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-24 04:11:04,959 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. completed. 2023-07-24 04:11:04,959 DEBUG [rs(jenkins-hbase4.apache.org,41157,1690171852333)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:04,959 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-24 04:11:04,959 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 
2023-07-24 04:11:04,960 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,960 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 04:11:04,960 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,41157,1690171852333' in zk 2023-07-24 04:11:04,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-24 04:11:04,963 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 04:11:04,963 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,963 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 04:11:04,963 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,963 DEBUG [member: 'jenkins-hbase4.apache.org,41157,1690171852333' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 04:11:04,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-24 04:11:04,964 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-24 04:11:04,964 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-24 04:11:04,965 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 
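Editor's note: the HBaseAdmin lines above ("Getting current status of snapshot from master...", "(#2) Sleeping: 200ms while waiting for snapshot completion") show the client polling the master with progressively longer pauses while the snapshot is still in progress. Below is a minimal, generic sketch of that poll-with-growing-backoff pattern; the condition is passed in as a BooleanSupplier instead of the real is-snapshot-done RPC, and the retry schedule is made up for the example.

import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch only: poll a condition with increasing sleeps, roughly like the
// "(#2) Sleeping: 200ms", "(#3) Sleeping: 300ms", "(#4) Sleeping: 500ms" lines.
public final class BackoffPollSketch {
  public static boolean waitFor(BooleanSupplier done, long maxWaitMillis)
      throws InterruptedException {
    long[] pauses = {100, 200, 300, 500, 1000, 2000};   // illustrative schedule
    long deadline = System.currentTimeMillis() + maxWaitMillis;
    for (int attempt = 0; System.currentTimeMillis() < deadline; attempt++) {
      if (done.getAsBoolean()) {
        return true;                                    // e.g. master reports the snapshot finished
      }
      long pause = pauses[Math.min(attempt, pauses.length - 1)];
      TimeUnit.MILLISECONDS.sleep(pause);
    }
    return done.getAsBoolean();
  }
}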
2023-07-24 04:11:04,965 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-24 04:11:04,965 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-24 04:11:04,965 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 04:11:04,966 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,966 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,966 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-24 04:11:04,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 04:11:04,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,969 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,969 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase4.apache.org,41157,1690171852333': 2023-07-24 04:11:04,970 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,41157,1690171852333' released barrier for procedure'Group_testCloneSnapshot_snap', counting down latch. Waiting for 0 more 2023-07-24 04:11:04,970 INFO [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-24 04:11:04,970 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
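Editor's note: the "|-abort", "|-acquired", "|----Group_testCloneSnapshot_snap", "|-------jenkins-hbase4.apache.org,..." lines above are ZKProcedureUtil printing the current znode tree under /hbase/online-snapshot. Here is a small sketch of that kind of recursive dump using the plain ZooKeeper client; the indentation rule is reverse-engineered from the log output and the class name is invented for the example.

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Sketch only: recursively print a znode subtree, one node per line, with the
// dash prefix growing by depth, similar to the ZKProcedureUtil dump above.
public final class ZNodeTreeDumpSketch {
  public static void dump(ZooKeeper zk, String path, int depth)
      throws KeeperException, InterruptedException {
    String label = depth == 0 ? path : path.substring(path.lastIndexOf('/') + 1);
    int dashes = depth == 0 ? 1 : 1 + 3 * (depth - 1);  // matches "|-", "|----", "|-------"
    StringBuilder line = new StringBuilder("|");
    for (int i = 0; i < dashes; i++) {
      line.append('-');
    }
    System.out.println(line.append(label));
    for (String child : zk.getChildren(path, false)) {
      dump(zk, path + "/" + child, depth + 1);
    }
  }
}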
2023-07-24 04:11:04,970 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-24 04:11:04,970 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-24 04:11:04,970 INFO [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snapincluding nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,972 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-24 04:11:04,972 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-24 04:11:04,972 INFO [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,973 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,972 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,973 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,973 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,972 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,973 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,973 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,972 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,973 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,973 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-24 04:11:04,973 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,973 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,974 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,973 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 04:11:04,974 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,974 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-24 04:11:04,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,975 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,975 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 04:11:04,975 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-24 04:11:04,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 04:11:04,977 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,977 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): 
master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,978 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,978 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,982 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,982 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,982 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, 
quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,983 DEBUG [(jenkins-hbase4.apache.org,36883,1690171850269)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 04:11:04,983 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 04:11:04,984 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-24 04:11:04,982 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,984 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,984 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,984 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,984 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,983 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,985 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,983 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 04:11:04,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,983 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 04:11:04,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,985 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 04:11:04,985 INFO 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,986 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 04:11:04,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:04,987 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-24 04:11:05,153 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 04:11:05,165 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-24 04:11:05,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-24 04:11:05,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-24 04:11:05,167 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2428): (#3) Sleeping: 300ms while waiting for snapshot completion. 2023-07-24 04:11:05,428 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-24 04:11:05,467 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-24 04:11:05,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-24 04:11:05,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-24 04:11:05,469 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2428): (#4) Sleeping: 500ms while waiting for snapshot completion. 
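Editor's note: the "Sentinel is done, just moving the snapshot from .../.hbase-snapshot/.tmp/... to .../.hbase-snapshot/..." line above is the publish step: the snapshot is assembled in a temporary working directory and only becomes visible under .hbase-snapshot once the whole directory is renamed into place. A small sketch of that publish-by-rename idea with the Hadoop FileSystem API follows; the paths are illustrative placeholders and the helper is not the HBase implementation.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: build a snapshot under a working (".tmp") directory, then make it
// visible by renaming the whole directory to its final location, as logged above.
public final class SnapshotPublishSketch {
  public static void publish(Configuration conf, Path snapshotRoot, String snapshotName)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);
    Path working = new Path(new Path(snapshotRoot, ".tmp"), snapshotName);
    Path completed = new Path(snapshotRoot, snapshotName);
    if (!fs.rename(working, completed)) {   // the directory rename is the commit point
      throw new IOException("Failed to publish snapshot " + snapshotName);
    }
  }
}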
2023-07-24 04:11:05,502 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 2023-07-24 04:11:05,502 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-24 04:11:05,503 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-24 04:11:05,503 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1690171864852Consolidate snapshot: Group_testCloneSnapshot_snap at 1690171864984 (+132 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1690171864984Writing data manifest for Group_testCloneSnapshot_snap at 1690171864995 (+11 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1690171865415 (+420 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1690171865502 (+87 ms) 2023-07-24 04:11:05,505 DEBUG [PEWorker-5] locking.LockProcedure(242): UNLOCKED pid=87, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 04:11:05,507 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 651 msec 2023-07-24 04:11:05,969 DEBUG [Listener at localhost/41307] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-24 04:11:05,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-24 04:11:05,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 2023-07-24 04:11:05,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-24 04:11:05,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:05,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:05,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:05,991 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 
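Editor's note: the "Table snapshot journal" entry above prints each phase of the snapshot with its timestamp and the delta to the previous phase ("Consolidate snapshot ... at 1690171864984 (+132 ms)", and so on). Below is a tiny sketch of such a journal, recording (status, timestamp) pairs and printing the increments; it is a generic illustration, not the monitored-task code that produced the log line.

import java.util.ArrayList;
import java.util.List;

// Sketch only: a status journal that records each phase with a timestamp and,
// when printed, shows the time spent since the previous phase, as in the log above.
public final class StatusJournalSketch {
  private static final class Entry {
    final String status;
    final long timeMillis;
    Entry(String status, long timeMillis) { this.status = status; this.timeMillis = timeMillis; }
  }

  private final List<Entry> entries = new ArrayList<>();

  public void record(String status) {
    entries.add(new Entry(status, System.currentTimeMillis()));
  }

  public String prettyPrint() {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < entries.size(); i++) {
      Entry e = entries.get(i);
      sb.append(e.status).append(" at ").append(e.timeMillis);
      if (i > 0) {
        sb.append(" (+").append(e.timeMillis - entries.get(i - 1).timeMillis).append(" ms)");
      }
      sb.append('\n');
    }
    return sb.toString();
  }
}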
2023-07-24 04:11:06,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690171864804 type: FLUSH version: 2 ttl: 0 ) 2023-07-24 04:11:06,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-24 04:11:06,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 04:11:06,038 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-24 04:11:06,045 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690171864804 type: FLUSH version: 2 ttl: 0 2023-07-24 04:11:06,045 DEBUG [PEWorker-2] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-24 04:11:06,046 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(239): region to add: 260281154dc237c5f79b733f26982856 2023-07-24 04:11:06,047 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(585): clone region=260281154dc237c5f79b733f26982856 as dd4dc6495b8c558ad83598d09d3e60bf in snapshot Group_testCloneSnapshot_snap 2023-07-24 04:11:06,049 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => dd4dc6495b8c558ad83598d09d3e60bf, NAME => 'Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:11:06,088 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:06,089 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing dd4dc6495b8c558ad83598d09d3e60bf, disabling compactions & flushes 2023-07-24 04:11:06,089 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:06,089 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 
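Editor's note: the RestoreSnapshotHelper lines above ("region to add: 260281154dc237c5f79b733f26982856", "clone region=260281154dc237c5f79b733f26982856 as dd4dc6495b8c558ad83598d09d3e60bf in snapshot ...") show that cloning keeps the source region's key range while the clone table's region gets a fresh encoded name. A hedged sketch of building such a clone region descriptor with the HBase 2.x client API (RegionInfoBuilder) follows; it only illustrates the idea under that assumption and is not the helper's actual code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;

// Sketch only: derive a region for the clone table covering the same key range as
// the source region; because the table name differs, the resulting encoded name
// differs from the source region's, as seen in the log above.
public final class CloneRegionSketch {
  public static RegionInfo cloneRegion(RegionInfo source, TableName cloneTable) {
    return RegionInfoBuilder.newBuilder(cloneTable)
        .setStartKey(source.getStartKey())
        .setEndKey(source.getEndKey())
        .build();
  }
}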
2023-07-24 04:11:06,089 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. after waiting 0 ms 2023-07-24 04:11:06,089 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:06,089 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:06,089 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for dd4dc6495b8c558ad83598d09d3e60bf: 2023-07-24 04:11:06,089 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690171864804 type: FLUSH version: 2 ttl: 0 2023-07-24 04:11:06,090 INFO [PEWorker-2] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-24 04:11:06,095 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690171866095"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171866095"}]},"ts":"1690171866095"} 2023-07-24 04:11:06,096 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:11:06,097 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171866097"}]},"ts":"1690171866097"} 2023-07-24 04:11:06,099 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-24 04:11:06,103 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:06,104 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:06,104 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:06,104 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:06,104 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 04:11:06,104 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:06,104 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=dd4dc6495b8c558ad83598d09d3e60bf, ASSIGN}] 2023-07-24 04:11:06,106 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=dd4dc6495b8c558ad83598d09d3e60bf, ASSIGN 2023-07-24 04:11:06,107 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, 
region=dd4dc6495b8c558ad83598d09d3e60bf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1690171852273; forceNewPlan=false, retain=false 2023-07-24 04:11:06,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 04:11:06,258 INFO [jenkins-hbase4:36883] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 04:11:06,263 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=dd4dc6495b8c558ad83598d09d3e60bf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:06,263 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690171866263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171866263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171866263"}]},"ts":"1690171866263"} 2023-07-24 04:11:06,266 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure dd4dc6495b8c558ad83598d09d3e60bf, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:06,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 04:11:06,327 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCloneSnapshot' 2023-07-24 04:11:06,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 
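Editor's note: the balancer line above ("Reassigned 1 regions. 1 retained the pre-restart assignment.") is the assignment step picking a server for the clone's single region before the OpenRegionProcedure runs against it. As a purely illustrative stand-in for that decision, here is a minimal round-robin spread of regions over servers; it is not the HBase balancer, which also weighs load, locality and rack placement.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: assign regions to servers round-robin to show the basic shape of
// an assignment plan (server -> regions); real balancing is far more involved.
public final class RoundRobinAssignSketch {
  public static Map<String, List<String>> assign(List<String> regions, List<String> servers) {
    Map<String, List<String>> plan = new HashMap<>();
    for (String server : servers) {
      plan.put(server, new ArrayList<>());
    }
    for (int i = 0; i < regions.size(); i++) {
      String server = servers.get(i % servers.size());
      plan.get(server).add(regions.get(i));
    }
    return plan;
  }
}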
2023-07-24 04:11:06,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd4dc6495b8c558ad83598d09d3e60bf, NAME => 'Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:06,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:06,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,426 INFO [StoreOpener-dd4dc6495b8c558ad83598d09d3e60bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,427 DEBUG [StoreOpener-dd4dc6495b8c558ad83598d09d3e60bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/test 2023-07-24 04:11:06,427 DEBUG [StoreOpener-dd4dc6495b8c558ad83598d09d3e60bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/test 2023-07-24 04:11:06,428 INFO [StoreOpener-dd4dc6495b8c558ad83598d09d3e60bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd4dc6495b8c558ad83598d09d3e60bf columnFamilyName test 2023-07-24 04:11:06,429 INFO [StoreOpener-dd4dc6495b8c558ad83598d09d3e60bf-1] regionserver.HStore(310): Store=dd4dc6495b8c558ad83598d09d3e60bf/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:06,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,430 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:06,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:11:06,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dd4dc6495b8c558ad83598d09d3e60bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11484670240, jitterRate=0.06959326565265656}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:06,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dd4dc6495b8c558ad83598d09d3e60bf: 2023-07-24 04:11:06,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf., pid=90, masterSystemTime=1690171866418 2023-07-24 04:11:06,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:06,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 
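Editor's note: after the region opens, the region server's "Post open deploy tasks" report back and the master records the new location in hbase:meta, which is the RegionStateStore Put shown just below (qualifiers regioninfo, server, serverstartcode, seqnumDuringOpen in family info). The following sketch only shows the shape of such a mutation with the public Put API; the qualifier names mirror the log, but this is not the RegionStateStore code and the values here are placeholders.

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: one meta-style row per region, family "info", recording where the
// region is open and its open sequence number, as in the Put logged below.
public final class MetaOpenPutSketch {
  public static Put openRegionPut(String regionRowKey, String serverName,
      long serverStartCode, long openSeqNum) {
    byte[] info = Bytes.toBytes("info");
    Put put = new Put(Bytes.toBytes(regionRowKey));
    put.addColumn(info, Bytes.toBytes("server"), Bytes.toBytes(serverName));
    put.addColumn(info, Bytes.toBytes("serverstartcode"), Bytes.toBytes(serverStartCode));
    put.addColumn(info, Bytes.toBytes("seqnumDuringOpen"), Bytes.toBytes(openSeqNum));
    put.addColumn(info, Bytes.toBytes("state"), Bytes.toBytes("OPEN"));
    return put;
  }
}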
2023-07-24 04:11:06,438 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=dd4dc6495b8c558ad83598d09d3e60bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:06,439 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690171866438"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171866438"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171866438"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171866438"}]},"ts":"1690171866438"} 2023-07-24 04:11:06,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-24 04:11:06,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure dd4dc6495b8c558ad83598d09d3e60bf, server=jenkins-hbase4.apache.org,37679,1690171852273 in 174 msec 2023-07-24 04:11:06,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-24 04:11:06,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=dd4dc6495b8c558ad83598d09d3e60bf, ASSIGN in 338 msec 2023-07-24 04:11:06,445 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171866445"}]},"ts":"1690171866445"} 2023-07-24 04:11:06,446 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-24 04:11:06,451 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690171864804 type: FLUSH version: 2 ttl: 0 ) in 450 msec 2023-07-24 04:11:06,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 04:11:06,621 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 88 completed 2023-07-24 04:11:06,623 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-24 04:11:06,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot 2023-07-24 04:11:06,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:06,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 04:11:06,627 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171866627"}]},"ts":"1690171866627"} 2023-07-24 04:11:06,629 INFO 
[PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-24 04:11:06,630 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-24 04:11:06,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, UNASSIGN}] 2023-07-24 04:11:06,635 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, UNASSIGN 2023-07-24 04:11:06,636 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=260281154dc237c5f79b733f26982856, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:06,636 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171866636"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171866636"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171866636"}]},"ts":"1690171866636"} 2023-07-24 04:11:06,637 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; CloseRegionProcedure 260281154dc237c5f79b733f26982856, server=jenkins-hbase4.apache.org,41157,1690171852333}] 2023-07-24 04:11:06,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 04:11:06,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 260281154dc237c5f79b733f26982856 2023-07-24 04:11:06,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 260281154dc237c5f79b733f26982856, disabling compactions & flushes 2023-07-24 04:11:06,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:06,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:06,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. after waiting 0 ms 2023-07-24 04:11:06,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 
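Editor's note: the "Started disable of Group_testCloneSnapshot" / DisableTableProcedure / UNASSIGN / close-region messages above, and the DeleteTableProcedure that follows, are the server side of a plain client call sequence: the test disables the source table and then deletes it. A minimal sketch of that client sequence with the public Admin API follows; the connection setup is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch only: the client-side calls behind the disable/delete procedures in the log.
public final class DropTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml on the classpath
    TableName table = TableName.valueOf("Group_testCloneSnapshot");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);   // drives DisableTableProcedure and region UNASSIGN
      }
      admin.deleteTable(table);      // drives DeleteTableProcedure; region files get archived
    }
  }
}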
2023-07-24 04:11:06,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-24 04:11:06,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856. 2023-07-24 04:11:06,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 260281154dc237c5f79b733f26982856: 2023-07-24 04:11:06,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 260281154dc237c5f79b733f26982856 2023-07-24 04:11:06,799 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=260281154dc237c5f79b733f26982856, regionState=CLOSED 2023-07-24 04:11:06,799 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690171866799"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171866799"}]},"ts":"1690171866799"} 2023-07-24 04:11:06,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-24 04:11:06,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; CloseRegionProcedure 260281154dc237c5f79b733f26982856, server=jenkins-hbase4.apache.org,41157,1690171852333 in 163 msec 2023-07-24 04:11:06,804 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-24 04:11:06,804 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=260281154dc237c5f79b733f26982856, UNASSIGN in 171 msec 2023-07-24 04:11:06,805 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171866805"}]},"ts":"1690171866805"} 2023-07-24 04:11:06,806 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-24 04:11:06,812 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-24 04:11:06,814 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 190 msec 2023-07-24 04:11:06,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 04:11:06,930 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 91 completed 2023-07-24 04:11:06,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot 2023-07-24 04:11:06,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 
04:11:06,933 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=94, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:06,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-24 04:11:06,934 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=94, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:06,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:06,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:06,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:06,938 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 2023-07-24 04:11:06,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 04:11:06,940 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/recovered.edits, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/test] 2023-07-24 04:11:06,945 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/recovered.edits/5.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856/recovered.edits/5.seqid 2023-07-24 04:11:06,946 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot/260281154dc237c5f79b733f26982856 2023-07-24 04:11:06,947 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-24 04:11:06,949 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=94, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:06,951 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-24 04:11:06,952 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 
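Once the table is disabled, the delete driving pid=94 above is likewise a single Admin call; a hedged sketch (same assumed connection as before). The rsgroup coprocessor removes the table from its group as a side effect, as the RSGroupAdminEndpoint lines show:

    // Hedged sketch: the delete behind DeleteTableProcedure (table must already be disabled).
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DeleteTableExample {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testCloneSnapshot");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.deleteTable(table);                 // archives region files, then clears hbase:meta
          boolean gone = !admin.tableExists(table); // should be true once the procedure finishes
          System.out.println("table removed: " + gone);
        }
      }
    }
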
2023-07-24 04:11:06,954 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=94, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:06,954 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-24 04:11:06,954 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171866954"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:06,955 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 04:11:06,956 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 260281154dc237c5f79b733f26982856, NAME => 'Group_testCloneSnapshot,,1690171864179.260281154dc237c5f79b733f26982856.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 04:11:06,956 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-24 04:11:06,956 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171866956"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:06,957 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-24 04:11:06,959 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=94, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:06,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 28 msec 2023-07-24 04:11:07,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 04:11:07,042 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 94 completed 2023-07-24 04:11:07,042 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-24 04:11:07,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot_clone 2023-07-24 04:11:07,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-24 04:11:07,048 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171867047"}]},"ts":"1690171867047"} 2023-07-24 04:11:07,049 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-24 04:11:07,052 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-24 04:11:07,052 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=dd4dc6495b8c558ad83598d09d3e60bf, UNASSIGN}] 2023-07-24 04:11:07,054 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=96, ppid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=dd4dc6495b8c558ad83598d09d3e60bf, UNASSIGN 2023-07-24 04:11:07,056 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=96 updating hbase:meta row=dd4dc6495b8c558ad83598d09d3e60bf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:07,056 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690171867056"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171867056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171867056"}]},"ts":"1690171867056"} 2023-07-24 04:11:07,058 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=96, state=RUNNABLE; CloseRegionProcedure dd4dc6495b8c558ad83598d09d3e60bf, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:07,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-24 04:11:07,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:07,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dd4dc6495b8c558ad83598d09d3e60bf, disabling compactions & flushes 2023-07-24 04:11:07,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:07,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:07,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. after waiting 0 ms 2023-07-24 04:11:07,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 2023-07-24 04:11:07,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:11:07,219 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf. 
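The clone table being closed here was created from a snapshot of Group_testCloneSnapshot earlier in the test; a minimal sketch of such a snapshot/clone pair (the snapshot name below is hypothetical, for illustration only):

    // Hedged sketch: producing a clone table like Group_testCloneSnapshot_clone.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class CloneSnapshotExample {
      static void cloneTable(Admin admin) throws Exception {
        TableName source = TableName.valueOf("Group_testCloneSnapshot");
        TableName clone = TableName.valueOf("Group_testCloneSnapshot_clone");
        admin.snapshot("Group_testCloneSnapshot_snap", source);      // hypothetical snapshot name
        admin.cloneSnapshot("Group_testCloneSnapshot_snap", clone);  // creates the clone table
      }
    }
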
2023-07-24 04:11:07,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dd4dc6495b8c558ad83598d09d3e60bf: 2023-07-24 04:11:07,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:07,222 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=96 updating hbase:meta row=dd4dc6495b8c558ad83598d09d3e60bf, regionState=CLOSED 2023-07-24 04:11:07,223 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690171867222"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171867222"}]},"ts":"1690171867222"} 2023-07-24 04:11:07,226 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=96 2023-07-24 04:11:07,226 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=96, state=SUCCESS; CloseRegionProcedure dd4dc6495b8c558ad83598d09d3e60bf, server=jenkins-hbase4.apache.org,37679,1690171852273 in 166 msec 2023-07-24 04:11:07,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-24 04:11:07,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=dd4dc6495b8c558ad83598d09d3e60bf, UNASSIGN in 174 msec 2023-07-24 04:11:07,229 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171867229"}]},"ts":"1690171867229"} 2023-07-24 04:11:07,230 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-24 04:11:07,233 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-24 04:11:07,235 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 190 msec 2023-07-24 04:11:07,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=95 2023-07-24 04:11:07,349 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 95 completed 2023-07-24 04:11:07,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot_clone 2023-07-24 04:11:07,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,353 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=98, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-24 04:11:07,354 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=98, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:07,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:07,358 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:07,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-24 04:11:07,360 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/recovered.edits, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/test] 2023-07-24 04:11:07,365 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/recovered.edits/4.seqid 2023-07-24 04:11:07,366 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/default/Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf 2023-07-24 04:11:07,366 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-24 04:11:07,369 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=98, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,371 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-24 04:11:07,372 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-24 04:11:07,374 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=98, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,374 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
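The HFileArchiver lines above move the clone's region files under the cluster archive directory rather than deleting them outright; a small sketch of verifying that with the Hadoop FileSystem API, using the archive path reported in this log:

    // Hedged sketch: checking the archived seqid file at the path logged by HFileArchiver.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveCheckExample {
      public static void main(String[] args) throws Exception {
        Path archived = new Path("hdfs://localhost:42399/user/jenkins/test-data/"
            + "8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/default/"
            + "Group_testCloneSnapshot_clone/dd4dc6495b8c558ad83598d09d3e60bf/recovered.edits/4.seqid");
        try (FileSystem fs = FileSystem.get(archived.toUri(), new Configuration())) {
          System.out.println("archived seqid file present: " + fs.exists(archived));
        }
      }
    }
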
2023-07-24 04:11:07,374 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171867374"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:07,377 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 04:11:07,377 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => dd4dc6495b8c558ad83598d09d3e60bf, NAME => 'Group_testCloneSnapshot_clone,,1690171864179.dd4dc6495b8c558ad83598d09d3e60bf.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 04:11:07,377 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-24 04:11:07,377 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171867377"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:07,382 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-24 04:11:07,385 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=98, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:07,386 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 35 msec 2023-07-24 04:11:07,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-24 04:11:07,460 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 98 completed 2023-07-24 04:11:07,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:07,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
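The teardown that follows walks the rsgroup admin endpoint: list the groups, move an empty table/server set back to 'default', drop and re-create the 'master' group, then try to move the master's address into it. A hedged sketch using RSGroupAdminClient, the client class visible in the stack traces below; note the final move is expected to fail here, since the master is not an online region server:

    // Hedged sketch of the rsgroup teardown sequence performed between test methods.
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownExample {
      static void resetGroups(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.moveTables(Collections.<TableName>emptySet(), "default");  // empty set is ignored by the server
        groups.moveServers(Collections.<Address>emptySet(), "default");
        groups.removeRSGroup("master");
        groups.addRSGroup("master");
        // Throws ConstraintException in this log: the master is not a live region server.
        groups.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36883)), "master");
      }
    }
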
2023-07-24 04:11:07,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:07,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:07,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:07,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:07,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:07,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:07,475 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:07,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:07,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:07,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:07,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:07,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:07,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:07,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 569 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173067487, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:07,488 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:07,490 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:07,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,491 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:07,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:07,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:07,514 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=517 (was 520), OpenFileDescriptor=798 (was 805), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 511), ProcessCount=176 (was 176), AvailableMemoryMB=5991 (was 6109) 2023-07-24 04:11:07,514 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-24 04:11:07,532 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=517, OpenFileDescriptor=798, MaxFileDescriptor=60000, SystemLoadAverage=511, ProcessCount=176, AvailableMemoryMB=5987 2023-07-24 04:11:07,532 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-24 04:11:07,532 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:07,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:07,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:07,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:07,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:07,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:07,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:07,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:07,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:07,546 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:07,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:07,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:07,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:07,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:07,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:07,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:07,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 597 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173067559, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:07,560 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:07,562 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:07,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,563 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:07,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:07,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:07,565 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:07,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:07,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:07,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-24 04:11:07,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:07,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:07,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:07,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:07,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37679] to rsgroup appInfo 2023-07-24 04:11:07,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:07,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:07,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:07,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 04:11:07,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37679,1690171852273] are moved back to default 2023-07-24 04:11:07,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-24 04:11:07,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:07,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:07,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:07,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-24 04:11:07,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:07,609 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-24 04:11:07,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.ServerManager(636): Server jenkins-hbase4.apache.org,37679,1690171852273 added to draining server list. 2023-07-24 04:11:07,612 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:07,613 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase4.apache.org,37679,1690171852273 is already in the draining server list.Ignoring request to add it again. 
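The lines that follow create a namespace pinned to the 'appInfo' group through the hbase.rsgroup.name property and then attempt to create a table in it, which fails because the group's only server is draining. A minimal sketch of the same two calls (column family 'f' as in the log; error handling omitted):

    // Hedged sketch: namespace bound to an rsgroup, plus the table creation that fails below.
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RSGroupNamespaceExample {
      static void createGroupedTable(Admin admin) throws Exception {
        admin.createNamespace(NamespaceDescriptor.create("Group_ns")
            .addConfiguration("hbase.rsgroup.name", "appInfo").build());
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build()); // fails with "No online servers in the rsgroup appInfo ..." as logged below
      }
    }
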
2023-07-24 04:11:07,613 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase4.apache.org,37679,1690171852273] 2023-07-24 04:11:07,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-24 04:11:07,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=99, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:07,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-24 04:11:07,628 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:11:07,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 14 msec 2023-07-24 04:11:07,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-24 04:11:07,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:07,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:07,728 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:11:07,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 100 2023-07-24 04:11:07,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 04:11:07,746 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=100, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=21 msec 2023-07-24 04:11:07,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 04:11:07,833 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: 
Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 100 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-24 04:11:07,834 DEBUG [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 04:11:07,843 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:07,843 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-24 04:11:07,843 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase4.apache.org,37679,1690171852273] 2023-07-24 04:11:07,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:07,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:07,850 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:11:07,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 101 2023-07-24 04:11:07,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 04:11:07,852 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:07,852 DEBUG 
[PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:07,853 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:07,853 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:07,855 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:11:07,856 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:07,857 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 empty. 2023-07-24 04:11:07,857 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:07,857 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-24 04:11:07,870 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-24 04:11:07,871 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7c8bf056f56f1e8782cc13c568048ec5, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:11:07,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:07,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing 7c8bf056f56f1e8782cc13c568048ec5, disabling compactions & flushes 2023-07-24 04:11:07,880 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 
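Editor's note: the rolled-back CreateTableProcedure (pid=100) a few entries back and the retry now in flight (pid=101) both boil down to creating a namespace bound to the "appInfo" group and calling createTable against it. A hedged sketch of that namespace-plus-table step, assuming an open Admin handle, is below; while every server in the group is draining, the create is expected to fail with the HBaseIOException quoted in the trace above.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseIOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class CreateTableInGroupNamespace {
  static void run(Admin admin) throws IOException {
    // Namespace pinned to the rsgroup via its hbase.rsgroup.name configuration.
    admin.createNamespace(NamespaceDescriptor.create("Group_ns")
        .addConfiguration("hbase.rsgroup.name", "appInfo")
        .build());

    TableName tn = TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    try {
      admin.createTable(desc);
    } catch (HBaseIOException e) {
      // Expected while the group's only server is draining:
      // "No online servers in the rsgroup appInfo which table ... belongs to"
    }
  }
}
```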
2023-07-24 04:11:07,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:07,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. after waiting 0 ms 2023-07-24 04:11:07,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:07,880 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:07,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for 7c8bf056f56f1e8782cc13c568048ec5: 2023-07-24 04:11:07,883 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:11:07,884 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171867884"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171867884"}]},"ts":"1690171867884"} 2023-07-24 04:11:07,885 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 04:11:07,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:11:07,886 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171867886"}]},"ts":"1690171867886"} 2023-07-24 04:11:07,887 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-24 04:11:07,890 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, ASSIGN}] 2023-07-24 04:11:07,892 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, ASSIGN 2023-07-24 04:11:07,893 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1690171852273; forceNewPlan=false, retain=false 2023-07-24 04:11:07,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 04:11:08,044 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=7c8bf056f56f1e8782cc13c568048ec5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:08,044 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171868044"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171868044"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171868044"}]},"ts":"1690171868044"} 2023-07-24 04:11:08,049 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=102, state=RUNNABLE; OpenRegionProcedure 7c8bf056f56f1e8782cc13c568048ec5, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:08,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 04:11:08,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 
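Editor's note: the ASSIGN/OpenRegionProcedure entries above are only possible because the drained server was first taken off the draining list (the "Draining RS node deleted" event earlier). A minimal sketch of that recommission-and-retry step, reusing the server name and table descriptor from the previous sketches, follows.

```java
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

final class RecommissionAndRetryCreate {
  // `drained` and `desc` are the drained server and descriptor from the sketches above.
  static void run(Admin admin, ServerName drained, TableDescriptor desc) throws IOException {
    // Take the server off the draining list without preloading any regions onto it.
    admin.recommissionRegionServer(drained, Collections.emptyList());
    // Retry the create: the region can now be assigned within the "appInfo" group.
    admin.createTable(desc);
  }
}
```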
2023-07-24 04:11:08,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c8bf056f56f1e8782cc13c568048ec5, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:08,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:08,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,207 INFO [StoreOpener-7c8bf056f56f1e8782cc13c568048ec5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,208 DEBUG [StoreOpener-7c8bf056f56f1e8782cc13c568048ec5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/f 2023-07-24 04:11:08,208 DEBUG [StoreOpener-7c8bf056f56f1e8782cc13c568048ec5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/f 2023-07-24 04:11:08,209 INFO [StoreOpener-7c8bf056f56f1e8782cc13c568048ec5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c8bf056f56f1e8782cc13c568048ec5 columnFamilyName f 2023-07-24 04:11:08,209 INFO [StoreOpener-7c8bf056f56f1e8782cc13c568048ec5-1] regionserver.HStore(310): Store=7c8bf056f56f1e8782cc13c568048ec5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:08,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:11:08,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7c8bf056f56f1e8782cc13c568048ec5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11546832320, jitterRate=0.07538256049156189}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:08,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7c8bf056f56f1e8782cc13c568048ec5: 2023-07-24 04:11:08,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5., pid=103, masterSystemTime=1690171868201 2023-07-24 04:11:08,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:08,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 
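Editor's note: once the region is open, its placement can be checked from a client by listing the table's region locations; in this run the single region should report the server that was moved into "appInfo" (port 37679). A small illustrative check, assuming an open Connection, is below.

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

final class CheckRegionPlacement {
  static void run(Connection conn) throws IOException {
    TableName tn = TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers");
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      // Print encoded region name and hosting server for every region of the table.
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```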
2023-07-24 04:11:08,218 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=7c8bf056f56f1e8782cc13c568048ec5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:08,219 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171868218"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171868218"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171868218"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171868218"}]},"ts":"1690171868218"} 2023-07-24 04:11:08,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=102 2023-07-24 04:11:08,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=102, state=SUCCESS; OpenRegionProcedure 7c8bf056f56f1e8782cc13c568048ec5, server=jenkins-hbase4.apache.org,37679,1690171852273 in 174 msec 2023-07-24 04:11:08,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-24 04:11:08,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, ASSIGN in 332 msec 2023-07-24 04:11:08,224 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:11:08,224 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171868224"}]},"ts":"1690171868224"} 2023-07-24 04:11:08,225 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-24 04:11:08,228 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=101, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:11:08,229 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 381 msec 2023-07-24 04:11:08,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 04:11:08,454 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 101 completed 2023-07-24 04:11:08,455 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:08,460 INFO [Listener at localhost/41307] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_ns:testCreateWhenRsgroupNoOnlineServers 
2023-07-24 04:11:08,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 04:11:08,465 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171868465"}]},"ts":"1690171868465"} 2023-07-24 04:11:08,467 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-24 04:11:08,469 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-24 04:11:08,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, UNASSIGN}] 2023-07-24 04:11:08,472 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, UNASSIGN 2023-07-24 04:11:08,473 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=7c8bf056f56f1e8782cc13c568048ec5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:08,473 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171868473"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171868473"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171868473"}]},"ts":"1690171868473"} 2023-07-24 04:11:08,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 7c8bf056f56f1e8782cc13c568048ec5, server=jenkins-hbase4.apache.org,37679,1690171852273}] 2023-07-24 04:11:08,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 04:11:08,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7c8bf056f56f1e8782cc13c568048ec5, disabling compactions & flushes 2023-07-24 04:11:08,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:08,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 
2023-07-24 04:11:08,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. after waiting 0 ms 2023-07-24 04:11:08,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:08,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:11:08,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5. 2023-07-24 04:11:08,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7c8bf056f56f1e8782cc13c568048ec5: 2023-07-24 04:11:08,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,637 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=7c8bf056f56f1e8782cc13c568048ec5, regionState=CLOSED 2023-07-24 04:11:08,638 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690171868637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171868637"}]},"ts":"1690171868637"} 2023-07-24 04:11:08,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-24 04:11:08,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 7c8bf056f56f1e8782cc13c568048ec5, server=jenkins-hbase4.apache.org,37679,1690171852273 in 165 msec 2023-07-24 04:11:08,644 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-24 04:11:08,644 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=7c8bf056f56f1e8782cc13c568048ec5, UNASSIGN in 173 msec 2023-07-24 04:11:08,645 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171868645"}]},"ts":"1690171868645"} 2023-07-24 04:11:08,646 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-24 04:11:08,649 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-24 04:11:08,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 190 msec 2023-07-24 04:11:08,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 04:11:08,775 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 104 completed 2023-07-24 04:11:08,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,780 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,781 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-24 04:11:08,786 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:08,789 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/f, FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/recovered.edits] 2023-07-24 04:11:08,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:08,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:08,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:08,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 04:11:08,798 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/recovered.edits/4.seqid to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5/recovered.edits/4.seqid 2023-07-24 04:11:08,798 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(596): Deleted hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/7c8bf056f56f1e8782cc13c568048ec5 2023-07-24 04:11:08,798 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-24 04:11:08,802 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,804 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-24 04:11:08,807 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-24 04:11:08,809 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,809 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-24 04:11:08,809 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690171868809"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:08,811 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 04:11:08,811 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7c8bf056f56f1e8782cc13c568048ec5, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690171867846.7c8bf056f56f1e8782cc13c568048ec5.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 04:11:08,811 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
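Editor's note: the Disable/Delete procedures above (pid=104 and pid=107) and the namespace removal that follows are the standard client-side teardown of the test table. A compact sketch, assuming an open Admin handle, is below.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropTableAndNamespace {
  static void run(Admin admin) throws IOException {
    TableName tn = TableName.valueOf("Group_ns", "testCreateWhenRsgroupNoOnlineServers");
    admin.disableTable(tn);            // DisableTableProcedure (pid=104 in this log)
    admin.deleteTable(tn);             // DeleteTableProcedure (pid=107) archives the region dir
    admin.deleteNamespace("Group_ns"); // DeleteNamespaceProcedure (pid=108)
  }
}
```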
2023-07-24 04:11:08,812 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690171868812"}]},"ts":"9223372036854775807"} 2023-07-24 04:11:08,814 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-24 04:11:08,816 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:08,817 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 40 msec 2023-07-24 04:11:08,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 04:11:08,897 INFO [Listener at localhost/41307] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 107 completed 2023-07-24 04:11:08,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_ns 2023-07-24 04:11:08,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:08,904 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:08,907 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:08,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 04:11:08,910 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:08,911 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-24 04:11:08,911 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 04:11:08,912 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:08,914 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=108, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:08,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 13 msec 2023-07-24 04:11:09,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 04:11:09,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:09,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:09,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:09,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:09,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:09,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:09,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:09,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 04:11:09,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:09,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:09,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
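Editor's note: the MoveTables/MoveServers/RemoveRSGroup calls above restore every server to the default group and drop the extra groups between test methods. A hedged sketch of that restore loop using the rsgroup client follows; it simply skips the default group rather than special-casing the "master" and "appInfo" groups by name.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RestoreDefaultGroups {
  static void run(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // the default group is never removed
      }
      // Return any servers to the default group, then drop the now-empty group.
      if (!group.getServers().isEmpty()) {
        rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      }
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }
}
```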
2023-07-24 04:11:09,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:09,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-24 04:11:09,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 04:11:09,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:09,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-24 04:11:09,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37679,1690171852273] are moved back to appInfo 2023-07-24 04:11:09,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-24 04:11:09,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:09,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-24 04:11:09,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:09,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:09,035 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:09,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:09,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:09,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:09,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 
04:11:09,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:09,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:09,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 699 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173069045, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:09,046 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:09,047 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:09,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,048 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:09,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:09,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:09,065 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=520 (was 517) Potentially hanging thread: hconnection-0xe88b18-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe88b18-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: hconnection-0xe88b18-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1383649379_17 at /127.0.0.1:46490 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 798), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=478 (was 511), ProcessCount=176 (was 176), AvailableMemoryMB=5993 (was 5987) - AvailableMemoryMB LEAK? - 2023-07-24 04:11:09,066 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-24 04:11:09,082 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=520, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=478, ProcessCount=176, AvailableMemoryMB=5993 2023-07-24 04:11:09,082 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-24 04:11:09,082 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-24 04:11:09,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:09,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 04:11:09,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:09,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:09,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:09,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:09,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:09,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:09,095 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:09,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:09,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:09,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:09,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:09,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:09,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:09,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 727 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173069104, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:09,105 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:09,106 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:09,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,107 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:09,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:09,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:09,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:09,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:09,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] 
to rsgroup default 2023-07-24 04:11:09,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:09,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:09,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:09,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:09,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:09,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:09,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:09,123 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:09,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:09,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:09,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:09,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:09,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:09,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server 
jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:09,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 757 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173069132, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:09,133 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:09,134 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:09,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,135 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:09,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:09,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:09,154 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=521 (was 520) Potentially hanging thread: hconnection-0xe88b18-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 792), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=478 (was 478), ProcessCount=176 (was 176), AvailableMemoryMB=5994 (was 5993) - AvailableMemoryMB LEAK? 
- 2023-07-24 04:11:09,154 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-24 04:11:09,169 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=521, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=478, ProcessCount=176, AvailableMemoryMB=5994 2023-07-24 04:11:09,170 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-24 04:11:09,170 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-24 04:11:09,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:09,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:09,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:09,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:09,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:09,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:09,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:09,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:09,186 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:09,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:09,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:09,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:09,191 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:09,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:09,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36883] to rsgroup master 2023-07-24 04:11:09,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:09,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] ipc.CallRunner(144): callId: 785 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38870 deadline: 1690173069195, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 2023-07-24 04:11:09,196 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor62.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36883 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:09,197 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:09,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:09,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:09,198 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:39717, jenkins-hbase4.apache.org:41157, jenkins-hbase4.apache.org:43785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:09,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:09,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36883] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:09,199 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-24 04:11:09,199 INFO [Listener at localhost/41307] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 04:11:09,199 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x246ea770 to 127.0.0.1:59235 2023-07-24 04:11:09,199 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,199 DEBUG [Listener at localhost/41307] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 04:11:09,200 DEBUG [Listener at localhost/41307] util.JVMClusterUtil(257): Found active master hash=1539971133, stopped=false 2023-07-24 04:11:09,200 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 04:11:09,200 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 04:11:09,200 INFO [Listener at localhost/41307] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:11:09,202 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:09,202 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:09,202 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:09,202 DEBUG [Listener at 
localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:09,202 INFO [Listener at localhost/41307] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 04:11:09,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:09,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:09,202 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:09,202 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:09,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:09,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:09,203 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x271197f5 to 127.0.0.1:59235 2023-07-24 04:11:09,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:09,203 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37679,1690171852273' ***** 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41157,1690171852333' ***** 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39717,1690171855814' ***** 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:09,204 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:09,204 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:09,204 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:09,204 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase4.apache.org,43785,1690171856375' ***** 2023-07-24 04:11:09,205 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:09,210 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:09,210 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:09,210 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:09,210 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:09,208 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,208 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,208 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:09,214 INFO [RS:3;jenkins-hbase4:39717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@716625e4{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:09,214 INFO [RS:1;jenkins-hbase4:37679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@41e9759c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:09,214 INFO [RS:4;jenkins-hbase4:43785] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@49cb92f6{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:09,214 INFO [RS:2;jenkins-hbase4:41157] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@45df417c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:09,214 INFO [RS:3;jenkins-hbase4:39717] server.AbstractConnector(383): Stopped ServerConnector@2be36392{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:09,214 INFO [RS:1;jenkins-hbase4:37679] server.AbstractConnector(383): Stopped ServerConnector@7eca90ad{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:09,214 INFO [RS:4;jenkins-hbase4:43785] server.AbstractConnector(383): Stopped ServerConnector@6b49ad69{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:09,214 INFO [RS:3;jenkins-hbase4:39717] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:09,215 INFO [RS:4;jenkins-hbase4:43785] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:09,214 INFO [RS:1;jenkins-hbase4:37679] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:09,216 INFO [RS:3;jenkins-hbase4:39717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f9c119f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:09,214 INFO [RS:2;jenkins-hbase4:41157] server.AbstractConnector(383): Stopped ServerConnector@62eabc6f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:09,217 INFO [RS:1;jenkins-hbase4:37679] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@333febea{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:09,217 INFO [RS:3;jenkins-hbase4:39717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b0c8fbc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:09,217 INFO [RS:2;jenkins-hbase4:41157] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:09,217 INFO [RS:1;jenkins-hbase4:37679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6544163a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:09,218 INFO [RS:2;jenkins-hbase4:41157] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@19d64a7a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:09,216 INFO [RS:4;jenkins-hbase4:43785] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@60676a38{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:09,219 INFO [RS:2;jenkins-hbase4:41157] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1da587c4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:09,220 INFO [RS:4;jenkins-hbase4:43785] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@71907284{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:09,220 INFO [RS:3;jenkins-hbase4:39717] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:09,220 INFO [RS:1;jenkins-hbase4:37679] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:09,220 INFO [RS:3;jenkins-hbase4:39717] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:09,220 INFO [RS:1;jenkins-hbase4:37679] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:09,220 INFO [RS:3;jenkins-hbase4:39717] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:09,220 INFO [RS:1;jenkins-hbase4:37679] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 04:11:09,220 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:09,220 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:09,220 DEBUG [RS:3;jenkins-hbase4:39717] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x694ffec5 to 127.0.0.1:59235 2023-07-24 04:11:09,221 DEBUG [RS:1;jenkins-hbase4:37679] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ddd3abf to 127.0.0.1:59235 2023-07-24 04:11:09,221 DEBUG [RS:3;jenkins-hbase4:39717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,221 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,221 INFO [RS:3;jenkins-hbase4:39717] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:09,221 INFO [RS:2;jenkins-hbase4:41157] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:09,221 INFO [RS:2;jenkins-hbase4:41157] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:09,221 INFO [RS:2;jenkins-hbase4:41157] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:09,221 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(3305): Received CLOSE for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:09,222 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(3305): Received CLOSE for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:09,222 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:09,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6aa1ab126d58dcf7d835257119c9304f, disabling compactions & flushes 2023-07-24 04:11:09,221 DEBUG [RS:1;jenkins-hbase4:37679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,222 DEBUG [RS:2;jenkins-hbase4:41157] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7779fbd0 to 127.0.0.1:59235 2023-07-24 04:11:09,221 INFO [RS:3;jenkins-hbase4:39717] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:09,221 INFO [RS:4;jenkins-hbase4:43785] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:09,223 INFO [RS:3;jenkins-hbase4:39717] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:09,223 DEBUG [RS:2;jenkins-hbase4:41157] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,223 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 04:11:09,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:09,222 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37679,1690171852273; all regions closed. 2023-07-24 04:11:09,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
2023-07-24 04:11:09,223 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 04:11:09,223 INFO [RS:4;jenkins-hbase4:43785] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:09,223 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1478): Online Regions={6aa1ab126d58dcf7d835257119c9304f=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., 73e1052e9bc949a33667944e6caa42b4=hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.} 2023-07-24 04:11:09,224 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 04:11:09,223 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 04:11:09,224 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-24 04:11:09,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. after waiting 0 ms 2023-07-24 04:11:09,224 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 04:11:09,223 INFO [RS:4;jenkins-hbase4:43785] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:09,224 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 04:11:09,224 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:09,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:09,224 DEBUG [RS:4;jenkins-hbase4:43785] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f30c910 to 127.0.0.1:59235 2023-07-24 04:11:09,224 DEBUG [RS:4;jenkins-hbase4:43785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6aa1ab126d58dcf7d835257119c9304f 1/1 column families, dataSize=16.53 KB heapSize=26.77 KB 2023-07-24 04:11:09,224 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 04:11:09,224 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43785,1690171856375; all regions closed. 
2023-07-24 04:11:09,224 DEBUG [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1504): Waiting on 6aa1ab126d58dcf7d835257119c9304f, 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:09,225 DEBUG [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 04:11:09,225 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 04:11:09,225 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=14.11 KB heapSize=23.67 KB 2023-07-24 04:11:09,234 DEBUG [RS:4;jenkins-hbase4:43785] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:09,234 INFO [RS:4;jenkins-hbase4:43785] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43785%2C1690171856375:(num 1690171856647) 2023-07-24 04:11:09,234 DEBUG [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:09,234 DEBUG [RS:4;jenkins-hbase4:43785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,234 INFO [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37679%2C1690171852273.meta:.meta(num 1690171854552) 2023-07-24 04:11:09,234 INFO [RS:4;jenkins-hbase4:43785] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,235 INFO [RS:4;jenkins-hbase4:43785] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:09,235 INFO [RS:4;jenkins-hbase4:43785] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:09,235 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:09,235 INFO [RS:4;jenkins-hbase4:43785] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:09,236 INFO [RS:4;jenkins-hbase4:43785] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 04:11:09,237 INFO [RS:4;jenkins-hbase4:43785] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43785 2023-07-24 04:11:09,240 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 04:11:09,241 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 04:11:09,242 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 04:11:09,242 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 04:11:09,248 DEBUG [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:09,248 INFO [RS:1;jenkins-hbase4:37679] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37679%2C1690171852273:(num 1690171854413) 2023-07-24 04:11:09,248 DEBUG [RS:1;jenkins-hbase4:37679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,248 INFO [RS:1;jenkins-hbase4:37679] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,249 INFO [RS:1;jenkins-hbase4:37679] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:09,249 INFO [RS:1;jenkins-hbase4:37679] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:09,249 INFO [RS:1;jenkins-hbase4:37679] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:09,249 INFO [RS:1;jenkins-hbase4:37679] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:09,249 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 04:11:09,251 INFO [RS:1;jenkins-hbase4:37679] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37679 2023-07-24 04:11:09,253 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.52 KB at sequenceid=140 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:09,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.53 KB at sequenceid=67 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/.tmp/m/ef83161486a0423dbe81bde050027796 2023-07-24 04:11:09,260 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:09,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef83161486a0423dbe81bde050027796 2023-07-24 04:11:09,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/.tmp/m/ef83161486a0423dbe81bde050027796 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/ef83161486a0423dbe81bde050027796 2023-07-24 04:11:09,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef83161486a0423dbe81bde050027796 2023-07-24 04:11:09,270 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/ef83161486a0423dbe81bde050027796, entries=21, sequenceid=67, filesize=5.7 K 2023-07-24 04:11:09,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~16.53 KB/16928, heapSize ~26.76 KB/27400, currentSize=0 B/0 for 6aa1ab126d58dcf7d835257119c9304f in 47ms, sequenceid=67, compaction requested=false 2023-07-24 04:11:09,275 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=555 B at sequenceid=140 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/rep_barrier/d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:09,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/recovered.edits/70.seqid, newMaxSeqId=70, maxSeqId=1 2023-07-24 04:11:09,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:09,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:09,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:11:09,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:09,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73e1052e9bc949a33667944e6caa42b4, disabling compactions & flushes 2023-07-24 04:11:09,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:09,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:09,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. after waiting 0 ms 2023-07-24 04:11:09,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:09,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 73e1052e9bc949a33667944e6caa42b4 1/1 column families, dataSize=365 B heapSize=1.13 KB 2023-07-24 04:11:09,283 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:09,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=365 B at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/.tmp/info/c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:09,294 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.04 KB at sequenceid=140 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:09,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:09,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/.tmp/info/c6c036af6e624012bca44b5797bc2af2 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info/c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:09,299 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:09,300 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/35ed1307fbf449eea8d4667880d2c6b7 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:09,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:09,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info/c6c036af6e624012bca44b5797bc2af2, entries=5, sequenceid=11, filesize=5.1 K 2023-07-24 04:11:09,305 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~365 B/365, heapSize ~1.11 KB/1136, currentSize=0 B/0 for 73e1052e9bc949a33667944e6caa42b4 in 25ms, sequenceid=11, compaction requested=false 2023-07-24 04:11:09,307 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,308 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:09,308 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7, entries=10, sequenceid=140, filesize=5.9 K 2023-07-24 04:11:09,310 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/rep_barrier/d38645e5353e4b70a81221f90b832aa9 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:09,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/recovered.edits/14.seqid, newMaxSeqId=14, maxSeqId=1 2023-07-24 04:11:09,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:09,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:09,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
2023-07-24 04:11:09,317 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:09,317 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/d38645e5353e4b70a81221f90b832aa9, entries=5, sequenceid=140, filesize=5.5 K 2023-07-24 04:11:09,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/763c92d71bbe40558f4f7141fc340072 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:09,324 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:09,324 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072, entries=10, sequenceid=140, filesize=5.7 K 2023-07-24 04:11:09,326 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~14.11 KB/14445, heapSize ~23.63 KB/24192, currentSize=0 B/0 for 1588230740 in 101ms, sequenceid=140, compaction requested=false 2023-07-24 04:11:09,326 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 04:11:09,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:09,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:09,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:09,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1690171852273 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,326 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:09,327 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43785,1690171856375 2023-07-24 04:11:09,329 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43785,1690171856375] 2023-07-24 04:11:09,329 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43785,1690171856375; numProcessing=1 2023-07-24 04:11:09,334 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43785,1690171856375 already deleted, retry=false 2023-07-24 04:11:09,334 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43785,1690171856375 expired; onlineServers=3 2023-07-24 04:11:09,334 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37679,1690171852273] 2023-07-24 04:11:09,335 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37679,1690171852273; numProcessing=2 2023-07-24 04:11:09,337 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37679,1690171852273 already deleted, retry=false 2023-07-24 04:11:09,337 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37679,1690171852273 expired; onlineServers=2 2023-07-24 04:11:09,340 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/recovered.edits/143.seqid, newMaxSeqId=143, maxSeqId=77 2023-07-24 04:11:09,340 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:09,341 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:09,341 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 04:11:09,341 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:09,425 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41157,1690171852333; all regions closed. 2023-07-24 04:11:09,425 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39717,1690171855814; all regions closed. 2023-07-24 04:11:09,433 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333/jenkins-hbase4.apache.org%2C41157%2C1690171852333.1690171854413 not finished, retry = 0 2023-07-24 04:11:09,435 DEBUG [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:09,435 INFO [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39717%2C1690171855814.meta:.meta(num 1690171861041) 2023-07-24 04:11:09,441 DEBUG [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:09,441 INFO [RS:3;jenkins-hbase4:39717] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39717%2C1690171855814:(num 1690171856079) 2023-07-24 04:11:09,441 DEBUG [RS:3;jenkins-hbase4:39717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,441 INFO [RS:3;jenkins-hbase4:39717] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,442 INFO [RS:3;jenkins-hbase4:39717] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:09,442 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 04:11:09,443 INFO [RS:3;jenkins-hbase4:39717] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39717 2023-07-24 04:11:09,444 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:09,444 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,444 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39717,1690171855814 2023-07-24 04:11:09,446 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39717,1690171855814] 2023-07-24 04:11:09,446 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39717,1690171855814; numProcessing=3 2023-07-24 04:11:09,448 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39717,1690171855814 already deleted, retry=false 2023-07-24 04:11:09,449 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39717,1690171855814 expired; onlineServers=1 2023-07-24 04:11:09,536 DEBUG [RS:2;jenkins-hbase4:41157] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:09,536 INFO [RS:2;jenkins-hbase4:41157] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41157%2C1690171852333:(num 1690171854413) 2023-07-24 04:11:09,536 DEBUG [RS:2;jenkins-hbase4:41157] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,536 INFO [RS:2;jenkins-hbase4:41157] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:09,536 INFO [RS:2;jenkins-hbase4:41157] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:09,536 INFO [RS:2;jenkins-hbase4:41157] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:09,536 INFO [RS:2;jenkins-hbase4:41157] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:09,536 INFO [RS:2;jenkins-hbase4:41157] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:09,536 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 04:11:09,537 INFO [RS:2;jenkins-hbase4:41157] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41157 2023-07-24 04:11:09,539 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41157,1690171852333 2023-07-24 04:11:09,539 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:09,540 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41157,1690171852333] 2023-07-24 04:11:09,540 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41157,1690171852333; numProcessing=4 2023-07-24 04:11:09,541 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41157,1690171852333 already deleted, retry=false 2023-07-24 04:11:09,541 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41157,1690171852333 expired; onlineServers=0 2023-07-24 04:11:09,541 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36883,1690171850269' ***** 2023-07-24 04:11:09,541 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 04:11:09,542 DEBUG [M:0;jenkins-hbase4:36883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1af71206, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:09,542 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:09,544 INFO [M:0;jenkins-hbase4:36883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@407d85db{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 04:11:09,545 INFO [M:0;jenkins-hbase4:36883] server.AbstractConnector(383): Stopped ServerConnector@7d776eb6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:09,545 INFO [M:0;jenkins-hbase4:36883] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:09,545 INFO [M:0;jenkins-hbase4:36883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b062a14{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:09,546 INFO [M:0;jenkins-hbase4:36883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78f85e9a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:09,546 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36883,1690171850269 2023-07-24 04:11:09,546 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegionServer(1170): 
stopping server jenkins-hbase4.apache.org,36883,1690171850269; all regions closed. 2023-07-24 04:11:09,546 DEBUG [M:0;jenkins-hbase4:36883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:09,546 INFO [M:0;jenkins-hbase4:36883] master.HMaster(1491): Stopping master jetty server 2023-07-24 04:11:09,547 INFO [M:0;jenkins-hbase4:36883] server.AbstractConnector(383): Stopped ServerConnector@406a344{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:09,547 DEBUG [M:0;jenkins-hbase4:36883] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 04:11:09,547 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 04:11:09,547 DEBUG [M:0;jenkins-hbase4:36883] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 04:11:09,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171853988] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171853988,5,FailOnTimeoutGroup] 2023-07-24 04:11:09,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171853987] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171853987,5,FailOnTimeoutGroup] 2023-07-24 04:11:09,547 INFO [M:0;jenkins-hbase4:36883] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 04:11:09,547 INFO [M:0;jenkins-hbase4:36883] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 04:11:09,547 INFO [M:0;jenkins-hbase4:36883] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 04:11:09,548 DEBUG [M:0;jenkins-hbase4:36883] master.HMaster(1512): Stopping service threads 2023-07-24 04:11:09,548 INFO [M:0;jenkins-hbase4:36883] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 04:11:09,548 ERROR [M:0;jenkins-hbase4:36883] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 04:11:09,548 INFO [M:0;jenkins-hbase4:36883] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 04:11:09,549 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 04:11:09,550 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:09,550 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:09,550 DEBUG [M:0;jenkins-hbase4:36883] zookeeper.ZKUtil(398): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 04:11:09,550 WARN [M:0;jenkins-hbase4:36883] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 04:11:09,550 INFO [M:0;jenkins-hbase4:36883] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 04:11:09,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:09,550 INFO [M:0;jenkins-hbase4:36883] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 04:11:09,550 DEBUG [M:0;jenkins-hbase4:36883] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 04:11:09,550 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:11:09,551 DEBUG [M:0;jenkins-hbase4:36883] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:11:09,551 DEBUG [M:0;jenkins-hbase4:36883] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 04:11:09,551 DEBUG [M:0;jenkins-hbase4:36883] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 04:11:09,551 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=363.72 KB heapSize=433.36 KB 2023-07-24 04:11:09,575 INFO [M:0;jenkins-hbase4:36883] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=363.72 KB at sequenceid=796 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2ec2cd2579e74f859cf29716c9c6d781 2023-07-24 04:11:09,584 DEBUG [M:0;jenkins-hbase4:36883] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2ec2cd2579e74f859cf29716c9c6d781 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ec2cd2579e74f859cf29716c9c6d781 2023-07-24 04:11:09,589 INFO [M:0;jenkins-hbase4:36883] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ec2cd2579e74f859cf29716c9c6d781, entries=108, sequenceid=796, filesize=25.2 K 2023-07-24 04:11:09,591 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegion(2948): Finished flush of dataSize ~363.72 KB/372447, heapSize ~433.34 KB/443744, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 39ms, sequenceid=796, compaction requested=false 2023-07-24 04:11:09,595 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:11:09,595 DEBUG [M:0;jenkins-hbase4:36883] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 04:11:09,599 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:09,599 INFO [M:0;jenkins-hbase4:36883] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 04:11:09,600 INFO [M:0;jenkins-hbase4:36883] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36883 2023-07-24 04:11:09,602 DEBUG [M:0;jenkins-hbase4:36883] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36883,1690171850269 already deleted, retry=false 2023-07-24 04:11:09,802 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:09,802 INFO [M:0;jenkins-hbase4:36883] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36883,1690171850269; zookeeper connection closed. 2023-07-24 04:11:09,802 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:36883-0x10195863d980000, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:09,902 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:09,902 INFO [RS:2;jenkins-hbase4:41157] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41157,1690171852333; zookeeper connection closed. 
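The flush above happens internally while the master closes its local master:store region: roughly 364 KB of procedure state is written out as a single HFile before the region is closed. For an ordinary user table the same memstore-to-HFile flush can be requested explicitly through the public Admin API; a small sketch follows, assuming only standard HBase client classes (the table name is borrowed from this test run purely as an illustration).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RequestFlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Ask the servers to flush every region of this table; each region writes its
          // memstore out as a new HFile, just as master:store does in the entries above.
          admin.flush(TableName.valueOf("Group_testCreateAndDrop"));
        }
      }
    }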
2023-07-24 04:11:09,902 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41157-0x10195863d980003, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:09,903 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@71233650] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@71233650 2023-07-24 04:11:10,002 INFO [RS:3;jenkins-hbase4:39717] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39717,1690171855814; zookeeper connection closed. 2023-07-24 04:11:10,002 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:10,003 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:39717-0x10195863d98000b, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:10,003 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2832d2a3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2832d2a3 2023-07-24 04:11:10,103 INFO [RS:4;jenkins-hbase4:43785] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43785,1690171856375; zookeeper connection closed. 2023-07-24 04:11:10,103 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:10,103 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43785-0x10195863d98000d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:10,103 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@395c25eb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@395c25eb 2023-07-24 04:11:10,203 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:10,203 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x10195863d980002, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:10,203 INFO [RS:1;jenkins-hbase4:37679] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37679,1690171852273; zookeeper connection closed. 
2023-07-24 04:11:10,203 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@9efee90] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@9efee90 2023-07-24 04:11:10,204 INFO [Listener at localhost/41307] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-24 04:11:10,204 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-24 04:11:10,451 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:10,451 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 04:11:10,451 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 04:11:11,926 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 04:11:12,205 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 04:11:12,205 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 04:11:12,205 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 04:11:12,205 DEBUG [Listener at localhost/41307] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
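At this point the previous HBase cluster (1 master, 5 region servers) is fully stopped and the test is about to start a fresh master and region servers on random ports while the same DFS and ZooKeeper instances keep running. The start/stop pattern below is a minimal sketch using the public HBaseTestingUtility entry points; note that plain startMiniCluster/shutdownMiniCluster also manages DFS and ZooKeeper, whereas the test in this log restarts only the HBase layer.

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Start DFS, ZooKeeper and an HBase cluster with 3 region servers.
        util.startMiniCluster(3);

        // Quick sanity check against the cluster that was just started.
        int live = util.getAdmin().getClusterMetrics().getLiveServerMetrics().size();
        System.out.println("live region servers: " + live);

        // Tear everything down again (the test in this log instead stops only the
        // HBase processes and brings them back up against the still-running DFS/ZK).
        util.shutdownMiniCluster();
      }
    }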
2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:12,206 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:12,207 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40563 2023-07-24 04:11:12,208 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,210 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,211 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40563 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:12,217 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:405630x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:12,218 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40563-0x10195863d980010 connected 2023-07-24 04:11:12,220 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:12,221 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:12,222 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:12,222 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40563 2023-07-24 04:11:12,223 DEBUG [Listener at localhost/41307] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40563 2023-07-24 04:11:12,227 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40563 2023-07-24 04:11:12,228 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40563 2023-07-24 04:11:12,228 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40563 2023-07-24 04:11:12,230 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:12,230 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:12,231 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:12,231 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 04:11:12,231 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:12,231 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:12,231 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
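The RpcExecutor lines above describe how each server's call queues are laid out: the default executor gets handlerCount=3 with maxQueueLength=30 (consistent with the usual default of ten queued calls per handler), and the priority executor is split into separate read and write queues. These numbers come from ordinary configuration; the sketch below lists the relevant, well-known property keys, with values simply mirroring what this log reports.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcHandlerConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Number of RPC handler threads per server; the mini cluster here runs with 3.
        conf.setInt("hbase.regionserver.handler.count", 3);

        // Maximum queued calls; left unset it defaults to ten per handler (the 30 above).
        conf.setInt("hbase.ipc.server.max.callqueue.length", 30);

        // Fraction of split call queues dedicated to reads rather than writes.
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);

        System.out.println("handlers=" + conf.getInt("hbase.regionserver.handler.count", -1));
      }
    }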
2023-07-24 04:11:12,232 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 45411 2023-07-24 04:11:12,233 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:12,239 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,239 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fea07d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:12,239 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,240 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3651d084{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:12,249 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:12,250 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:12,250 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:12,251 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 04:11:12,253 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,254 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c0e136e{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 04:11:12,255 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@1f7713a7{HTTP/1.1, (http/1.1)}{0.0.0.0:45411} 2023-07-24 04:11:12,256 INFO [Listener at localhost/41307] server.Server(415): Started @28555ms 2023-07-24 04:11:12,256 INFO [Listener at localhost/41307] master.HMaster(444): hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca, hbase.cluster.distributed=false 2023-07-24 04:11:12,265 DEBUG [pool-349-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-24 04:11:12,278 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:12,279 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,279 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,279 INFO [Listener at 
localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:12,279 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,279 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:12,280 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:12,283 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46039 2023-07-24 04:11:12,283 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:12,328 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:12,329 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,330 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,331 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46039 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:12,339 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:460390x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:12,340 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:460390x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:12,341 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46039-0x10195863d980011 connected 2023-07-24 04:11:12,341 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:12,342 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:12,345 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46039 2023-07-24 04:11:12,345 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46039 2023-07-24 04:11:12,346 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46039 2023-07-24 04:11:12,347 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=46039 2023-07-24 04:11:12,347 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46039 2023-07-24 04:11:12,350 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:12,350 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:12,351 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:12,351 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:12,352 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:12,352 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:12,352 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:11:12,353 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 46751 2023-07-24 04:11:12,353 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:12,367 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,367 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5421514{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:12,368 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,368 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4622d84c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:12,376 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:12,377 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:12,377 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:12,378 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 04:11:12,379 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,380 INFO [Listener at localhost/41307] handler.ContextHandler(921): 
Started o.a.h.t.o.e.j.w.WebAppContext@53d03153{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:12,381 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@827b832{HTTP/1.1, (http/1.1)}{0.0.0.0:46751} 2023-07-24 04:11:12,381 INFO [Listener at localhost/41307] server.Server(415): Started @28680ms 2023-07-24 04:11:12,392 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:12,392 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,393 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,393 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:12,393 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,393 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:12,393 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:12,394 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43611 2023-07-24 04:11:12,394 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:12,395 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:12,395 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,396 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,397 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43611 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:12,405 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:436110x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:12,407 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43611-0x10195863d980012 connected 2023-07-24 04:11:12,407 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): 
regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:12,407 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:12,408 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:12,409 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43611 2023-07-24 04:11:12,409 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43611 2023-07-24 04:11:12,409 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43611 2023-07-24 04:11:12,410 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43611 2023-07-24 04:11:12,410 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43611 2023-07-24 04:11:12,412 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:12,412 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:12,412 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:12,412 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:12,413 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:12,413 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:12,413 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 04:11:12,413 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 44741 2023-07-24 04:11:12,413 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:12,415 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,415 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@176acb35{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:12,415 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,415 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42785be3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:12,421 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:12,422 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:12,422 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:12,423 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 04:11:12,424 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,425 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7a33fea8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:12,431 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@7125e0b4{HTTP/1.1, (http/1.1)}{0.0.0.0:44741} 2023-07-24 04:11:12,431 INFO [Listener at localhost/41307] server.Server(415): Started @28730ms 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:12,445 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:12,446 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43857 2023-07-24 04:11:12,446 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:12,447 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:12,448 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,449 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,450 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43857 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:12,456 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:438570x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:12,457 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:438570x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:12,461 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43857-0x10195863d980013 connected 2023-07-24 04:11:12,461 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:12,462 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:12,467 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43857 2023-07-24 04:11:12,467 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43857 2023-07-24 04:11:12,467 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43857 2023-07-24 04:11:12,468 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43857 2023-07-24 04:11:12,468 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43857 2023-07-24 04:11:12,470 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:12,470 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:12,470 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:12,471 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:12,471 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:12,471 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:12,471 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:11:12,472 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 45381 2023-07-24 04:11:12,472 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:12,473 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,473 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ffd85df{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:12,473 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,473 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@49da996{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:12,479 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:12,480 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:12,480 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:12,480 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 04:11:12,481 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:12,482 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2d4eff7e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:12,484 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@73212ba8{HTTP/1.1, (http/1.1)}{0.0.0.0:45381} 2023-07-24 04:11:12,484 INFO [Listener at localhost/41307] server.Server(415): Started @28783ms 2023-07-24 04:11:12,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:12,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@59439992{HTTP/1.1, (http/1.1)}{0.0.0.0:46805} 2023-07-24 04:11:12,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @28792ms 2023-07-24 04:11:12,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,495 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 04:11:12,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,496 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:12,496 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:12,496 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:12,496 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:12,497 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:12,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:11:12,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40563,1690171872205 from backup master directory 2023-07-24 
04:11:12,503 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:11:12,505 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,505 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 04:11:12,505 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:11:12,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:12,567 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x75853b23 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:12,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59406874, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:12,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:12,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 04:11:12,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:12,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269-dead as it is dead 2023-07-24 04:11:12,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269-dead/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 
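What begins here is the new master taking over the dead master's local store: the old WALs directory is renamed with a "-dead" suffix so nothing appends to it, the HDFS lease on its last WAL is recovered so the file can be read safely, and, as the next entries show, the WAL is then moved under the region's recovered.wals directory and replayed. A stripped-down sketch of the file-system side of that sequence follows, using only the plain Hadoop FileSystem/DistributedFileSystem API; the paths are short placeholders rather than the literal ones in the log, and the real code goes through HBase's RecoverLeaseFSUtils with retries instead of a single recoverLease call.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class WalTakeoverSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Placeholder paths standing in for .../MasterData/WALs/<dead master> above.
        Path walDir = new Path("/hbase/MasterData/WALs/old-master");
        Path deadDir = new Path("/hbase/MasterData/WALs/old-master-dead");
        Path lastWal = new Path(deadDir, "old-master.wal");
        Path recoveredDir = new Path("/hbase/MasterData/store/recovered.wals");

        // 1. Mark the directory as belonging to a dead master so nobody appends to it.
        fs.rename(walDir, deadDir);

        // 2. Recover the write lease so the possibly still-open WAL becomes readable.
        if (fs instanceof DistributedFileSystem) {
          boolean closed = ((DistributedFileSystem) fs).recoverLease(lastWal);
          System.out.println("lease recovered immediately: " + closed);
        }

        // 3. Move the WAL to where the region open code will find and replay it.
        fs.mkdirs(recoveredDir);
        fs.rename(lastWal, new Path(recoveredDir, lastWal.getName()));
      }
    }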
2023-07-24 04:11:12,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269-dead/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 after 4ms 2023-07-24 04:11:12,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269-dead/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 2023-07-24 04:11:12,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,36883,1690171850269-dead 2023-07-24 04:11:12,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40563%2C1690171872205, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/oldWALs, maxLogs=10 2023-07-24 04:11:12,606 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:12,607 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:12,607 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:12,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 2023-07-24 04:11:12,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK]] 2023-07-24 
04:11:12,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:12,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:12,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:12,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:12,615 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:12,616 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 04:11:12,617 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 04:11:12,623 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ec2cd2579e74f859cf29716c9c6d781 2023-07-24 04:11:12,623 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:12,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-24 04:11:12,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 2023-07-24 04:11:12,654 
DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 939, firstSequenceIdInLog=3, maxSequenceIdInLog=798, path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 2023-07-24 04:11:12,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36883%2C1690171850269.1690171852948 2023-07-24 04:11:12,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:12,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/798.seqid, newMaxSeqId=798, maxSeqId=1 2023-07-24 04:11:12,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=799; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11594283840, jitterRate=0.07980182766914368}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:12,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 04:11:12,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 04:11:12,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 04:11:12,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 04:11:12,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
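The replay above applies none of the 939 recovered entries: their data is already covered by the HFile flushed at sequenceid=796 during shutdown (the few entries with higher sequence ids are presumably WAL markers rather than row data), so everything is skipped, a 798.seqid marker is written, and the region reopens at next sequenceid=799. The toy sketch below illustrates just that skip-or-apply rule; the class and method names are invented for illustration and are not HBase's internal API.

    import java.util.Arrays;
    import java.util.List;

    public class RecoveredEditsReplaySketch {

      /** Made-up stand-in for one WAL entry: only its sequence id matters here. */
      static final class Edit {
        final long sequenceId;
        Edit(long sequenceId) { this.sequenceId = sequenceId; }
      }

      /**
       * Skips every edit whose sequence id is already covered by flushed store files
       * and counts the rest as applied, mirroring the "Applied 0, skipped 939"
       * accounting above. Returns the sequence id the region would reopen with.
       */
      static long replay(List<Edit> edits, long maxFlushedSeqId) {
        long applied = 0, skipped = 0, maxSeqIdInLog = -1;
        for (Edit e : edits) {
          maxSeqIdInLog = Math.max(maxSeqIdInLog, e.sequenceId);
          if (e.sequenceId <= maxFlushedSeqId) {
            skipped++;   // already durable in an HFile, nothing to redo
          } else {
            applied++;   // would be re-inserted into the memstore here
          }
        }
        System.out.printf("Applied %d, skipped %d, maxSequenceIdInLog=%d%n",
            applied, skipped, maxSeqIdInLog);
        return maxSeqIdInLog + 1;
      }

      public static void main(String[] args) {
        List<Edit> edits = Arrays.asList(new Edit(3), new Edit(4), new Edit(796));
        long nextSeqId = replay(edits, 796);   // the flush above completed at sequenceid=796
        System.out.println("next sequenceid=" + nextSeqId);
      }
    }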
2023-07-24 04:11:12,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 04:11:12,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-24 04:11:12,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-24 04:11:12,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-24 04:11:12,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-24 04:11:12,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-24 04:11:12,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36109,1690171852137, splitWal=true, meta=false 2023-07-24 04:11:12,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-24 04:11:12,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=14, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:11:12,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:11:12,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:11:12,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=21, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:12,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=42, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:12,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=63, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:12,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=64, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 04:11:12,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:12,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=68, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:12,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:12,678 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:12,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=75, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:12,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=76, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:12,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:12,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:12,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:12,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 04:11:12,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=87, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 04:11:12,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690171864804 type: FLUSH version: 2 ttl: 0 ) 2023-07-24 04:11:12,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:12,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:12,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:12,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:12,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=99, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:12,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:12,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; 
CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:12,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:12,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:12,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=108, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:12,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 18 msec 2023-07-24 04:11:12,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 04:11:12,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-24 04:11:12,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,39717,1690171855814, table=hbase:meta, region=1588230740 2023-07-24 04:11:12,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 2023-07-24 04:11:12,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41157,1690171852333 already deleted, retry=false 2023-07-24 04:11:12,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,41157,1690171852333 on jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,41157,1690171852333, splitWal=true, meta=false 2023-07-24 04:11:12,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=109 for jenkins-hbase4.apache.org,41157,1690171852333 (carryingMeta=false) jenkins-hbase4.apache.org,41157,1690171852333/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@290ba68c[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-24 04:11:12,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39717,1690171855814 already deleted, retry=false 2023-07-24 04:11:12,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,39717,1690171855814 on jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,39717,1690171855814, splitWal=true, meta=true 2023-07-24 04:11:12,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=110 for jenkins-hbase4.apache.org,39717,1690171855814 (carryingMeta=true) jenkins-hbase4.apache.org,39717,1690171855814/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4eecdb25[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 04:11:12,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43785,1690171856375 already deleted, retry=false 2023-07-24 04:11:12,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,43785,1690171856375 on jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,43785,1690171856375, splitWal=true, meta=false 2023-07-24 04:11:12,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=111 for jenkins-hbase4.apache.org,43785,1690171856375 (carryingMeta=false) jenkins-hbase4.apache.org,43785,1690171856375/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5a9624bc[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 04:11:12,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37679,1690171852273 already deleted, retry=false 2023-07-24 04:11:12,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,37679,1690171852273 on jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,37679,1690171852273, splitWal=true, meta=false 2023-07-24 04:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=112 for jenkins-hbase4.apache.org,37679,1690171852273 (carryingMeta=false) jenkins-hbase4.apache.org,37679,1690171852273/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@284bea92[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-24 04:11:12,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-24 04:11:12,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 04:11:12,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 04:11:12,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 04:11:12,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 04:11:12,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 04:11:12,703 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:12,703 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:12,703 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:12,703 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:12,703 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:12,706 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40563,1690171872205, sessionid=0x10195863d980010, setting cluster-up flag (Was=false) 2023-07-24 04:11:12,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 04:11:12,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 04:11:12,726 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:12,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 04:11:12,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 04:11:12,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-24 04:11:12,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 04:11:12,732 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:12,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-24 04:11:12,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-24 04:11:12,736 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:12,737 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:39717 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:39717 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:12,739 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:39717 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:39717 2023-07-24 04:11:12,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 04:11:12,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 04:11:12,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 04:11:12,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 04:11:12,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:12,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:12,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:12,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:12,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 04:11:12,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:12,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690171902750 2023-07-24 04:11:12,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 04:11:12,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 04:11:12,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 04:11:12,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 04:11:12,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 04:11:12,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 04:11:12,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:12,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 04:11:12,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 04:11:12,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 04:11:12,756 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41157,1690171852333; numProcessing=1 2023-07-24 04:11:12,756 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=109, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41157,1690171852333, splitWal=true, meta=false 2023-07-24 04:11:12,756 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39717,1690171855814; numProcessing=2 2023-07-24 04:11:12,756 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=110, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,39717,1690171855814, splitWal=true, meta=true 2023-07-24 04:11:12,756 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43785,1690171856375; numProcessing=3 2023-07-24 04:11:12,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 04:11:12,756 DEBUG [PEWorker-5] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37679,1690171852273; numProcessing=4 2023-07-24 04:11:12,756 INFO [PEWorker-5] procedure.ServerCrashProcedure(161): Start pid=112, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37679,1690171852273, splitWal=true, meta=false 2023-07-24 04:11:12,756 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=111, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43785,1690171856375, splitWal=true, meta=false 2023-07-24 04:11:12,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 04:11:12,757 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171872757,5,FailOnTimeoutGroup] 2023-07-24 04:11:12,757 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171872757,5,FailOnTimeoutGroup] 2023-07-24 04:11:12,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 04:11:12,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:12,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690171872757, completionTime=-1 2023-07-24 04:11:12,757 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-24 04:11:12,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 04:11:12,767 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=110, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,39717,1690171855814, splitWal=true, meta=true, isMeta: true 2023-07-24 04:11:12,772 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814-splitting 2023-07-24 04:11:12,773 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814-splitting dir is empty, no logs to split. 2023-07-24 04:11:12,773 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,39717,1690171855814 WAL count=0, meta=true 2023-07-24 04:11:12,776 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814-splitting dir is empty, no logs to split. 2023-07-24 04:11:12,776 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,39717,1690171855814 WAL count=0, meta=true 2023-07-24 04:11:12,776 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,39717,1690171855814 WAL splitting is done? 
wals=0, meta=true 2023-07-24 04:11:12,777 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 04:11:12,784 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=113, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 04:11:12,785 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=113, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:12,787 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:12,788 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:12,788 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:12,788 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:12,789 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:12,790 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:12,792 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:12,792 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:12,792 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:12,792 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:12,792 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:12,793 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:12,795 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:12,796 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:12,797 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:12,797 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ReadOnlyZKClient(139): Connect 0x39c18581 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:12,797 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ReadOnlyZKClient(139): Connect 0x0dbea709 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:12,798 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ReadOnlyZKClient(139): Connect 0x5b7c88f8 to 
127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:12,805 DEBUG [RS:1;jenkins-hbase4:43611] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55bb7de7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:12,805 DEBUG [RS:2;jenkins-hbase4:43857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37067cc9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:12,805 DEBUG [RS:1;jenkins-hbase4:43611] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5184e388, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:12,805 DEBUG [RS:0;jenkins-hbase4:46039] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37958ce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:12,805 DEBUG [RS:2;jenkins-hbase4:43857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3339eef1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:12,806 DEBUG [RS:0;jenkins-hbase4:46039] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@746cc79, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:12,814 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43857 2023-07-24 04:11:12,814 INFO [RS:2;jenkins-hbase4:43857] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:12,814 INFO [RS:2;jenkins-hbase4:43857] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:12,814 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 04:11:12,815 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40563,1690171872205 with isa=jenkins-hbase4.apache.org/172.31.14.131:43857, startcode=1690171872444 2023-07-24 04:11:12,815 DEBUG [RS:2;jenkins-hbase4:43857] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:12,817 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37013, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:12,818 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40563] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,818 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:12,819 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 04:11:12,819 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43611 2023-07-24 04:11:12,819 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46039 2023-07-24 04:11:12,819 INFO [RS:1;jenkins-hbase4:43611] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:12,819 INFO [RS:0;jenkins-hbase4:46039] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:12,819 INFO [RS:0;jenkins-hbase4:46039] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:12,819 INFO [RS:1;jenkins-hbase4:43611] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:12,819 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:11:12,819 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:12,819 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 04:11:12,820 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:12,820 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45411 2023-07-24 04:11:12,820 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40563,1690171872205 with isa=jenkins-hbase4.apache.org/172.31.14.131:43611, startcode=1690171872392 2023-07-24 04:11:12,820 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40563,1690171872205 with isa=jenkins-hbase4.apache.org/172.31.14.131:46039, startcode=1690171872278 2023-07-24 04:11:12,820 DEBUG [RS:1;jenkins-hbase4:43611] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:12,820 DEBUG [RS:0;jenkins-hbase4:46039] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:12,821 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52075, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:12,822 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60835, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:12,822 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40563] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,822 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:12,822 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 04:11:12,822 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40563] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,822 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 04:11:12,822 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:12,822 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 04:11:12,822 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:12,822 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:12,822 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45411 2023-07-24 04:11:12,822 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:12,822 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45411 2023-07-24 04:11:12,825 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:12,826 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ZKUtil(162): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,827 WARN [RS:2;jenkins-hbase4:43857] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:11:12,827 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43857,1690171872444] 2023-07-24 04:11:12,827 INFO [RS:2;jenkins-hbase4:43857] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:12,827 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ZKUtil(162): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,827 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,827 WARN [RS:0;jenkins-hbase4:46039] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 04:11:12,827 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ZKUtil(162): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,827 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43611,1690171872392] 2023-07-24 04:11:12,827 WARN [RS:1;jenkins-hbase4:43611] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:11:12,827 INFO [RS:0;jenkins-hbase4:46039] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:12,827 INFO [RS:1;jenkins-hbase4:43611] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:12,827 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,827 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46039,1690171872278] 2023-07-24 04:11:12,827 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,834 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ZKUtil(162): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,834 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ZKUtil(162): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,834 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ZKUtil(162): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,834 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ZKUtil(162): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,835 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ZKUtil(162): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,835 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ZKUtil(162): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,835 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ZKUtil(162): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,835 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ZKUtil(162): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,835 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ZKUtil(162): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,836 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:12,836 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:12,836 INFO [RS:1;jenkins-hbase4:43611] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:12,836 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:12,836 INFO [RS:0;jenkins-hbase4:46039] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:12,838 INFO [RS:1;jenkins-hbase4:43611] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:12,839 INFO [RS:1;jenkins-hbase4:43611] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:12,839 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,839 INFO [RS:2;jenkins-hbase4:43857] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:12,839 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:39717 this server is in the failed servers list 2023-07-24 04:11:12,842 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:12,843 INFO [RS:2;jenkins-hbase4:43857] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:12,844 INFO [RS:0;jenkins-hbase4:46039] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:12,850 INFO [RS:2;jenkins-hbase4:43857] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:12,850 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,850 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:12,850 INFO [RS:0;jenkins-hbase4:46039] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:12,850 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,850 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,850 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:12,851 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,851 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:12,851 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,851 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:12,852 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,852 DEBUG [RS:1;jenkins-hbase4:43611] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:12,852 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,852 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:2;jenkins-hbase4:43857] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,853 DEBUG [RS:0;jenkins-hbase4:46039] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:12,854 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,855 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,855 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,855 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=101ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 04:11:12,862 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,862 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,862 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,862 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,864 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,864 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,865 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,865 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,876 INFO [RS:1;jenkins-hbase4:43611] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:12,876 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43611,1690171872392-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,878 INFO [RS:2;jenkins-hbase4:43857] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:12,879 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43857,1690171872444-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:12,879 INFO [RS:0;jenkins-hbase4:46039] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:12,879 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46039,1690171872278-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,891 INFO [RS:1;jenkins-hbase4:43611] regionserver.Replication(203): jenkins-hbase4.apache.org,43611,1690171872392 started 2023-07-24 04:11:12,891 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43611,1690171872392, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43611, sessionid=0x10195863d980012 2023-07-24 04:11:12,891 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:12,891 DEBUG [RS:1;jenkins-hbase4:43611] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,891 DEBUG [RS:1;jenkins-hbase4:43611] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43611,1690171872392' 2023-07-24 04:11:12,891 DEBUG [RS:1;jenkins-hbase4:43611] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:12,892 DEBUG [RS:1;jenkins-hbase4:43611] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:12,892 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:12,892 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:12,892 DEBUG [RS:1;jenkins-hbase4:43611] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:12,892 DEBUG [RS:1;jenkins-hbase4:43611] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43611,1690171872392' 2023-07-24 04:11:12,892 DEBUG [RS:1;jenkins-hbase4:43611] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:12,893 DEBUG [RS:1;jenkins-hbase4:43611] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:12,893 DEBUG [RS:1;jenkins-hbase4:43611] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:12,893 INFO [RS:1;jenkins-hbase4:43611] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 04:11:12,896 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,897 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ZKUtil(398): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 04:11:12,897 INFO [RS:1;jenkins-hbase4:43611] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 04:11:12,897 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:12,897 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,898 INFO [RS:0;jenkins-hbase4:46039] regionserver.Replication(203): jenkins-hbase4.apache.org,46039,1690171872278 started 2023-07-24 04:11:12,899 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46039,1690171872278, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46039, sessionid=0x10195863d980011 2023-07-24 04:11:12,899 INFO [RS:2;jenkins-hbase4:43857] regionserver.Replication(203): jenkins-hbase4.apache.org,43857,1690171872444 started 2023-07-24 04:11:12,899 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:12,899 DEBUG [RS:0;jenkins-hbase4:46039] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,899 DEBUG [RS:0;jenkins-hbase4:46039] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46039,1690171872278' 2023-07-24 04:11:12,899 DEBUG [RS:0;jenkins-hbase4:46039] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:12,899 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43857,1690171872444, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43857, sessionid=0x10195863d980013 2023-07-24 04:11:12,899 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:12,899 DEBUG [RS:2;jenkins-hbase4:43857] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,899 DEBUG [RS:2;jenkins-hbase4:43857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43857,1690171872444' 2023-07-24 04:11:12,899 DEBUG [RS:2;jenkins-hbase4:43857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:12,899 DEBUG [RS:0;jenkins-hbase4:46039] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:12,907 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:12,907 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:12,907 DEBUG [RS:2;jenkins-hbase4:43857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:12,907 DEBUG [RS:0;jenkins-hbase4:46039] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:12,907 DEBUG [RS:0;jenkins-hbase4:46039] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46039,1690171872278' 2023-07-24 04:11:12,907 DEBUG [RS:0;jenkins-hbase4:46039] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:12,907 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(53): Procedure 
flush-table-proc started 2023-07-24 04:11:12,907 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:12,907 DEBUG [RS:0;jenkins-hbase4:46039] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:12,907 DEBUG [RS:2;jenkins-hbase4:43857] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:12,907 DEBUG [RS:2;jenkins-hbase4:43857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43857,1690171872444' 2023-07-24 04:11:12,907 DEBUG [RS:2;jenkins-hbase4:43857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:12,908 DEBUG [RS:0;jenkins-hbase4:46039] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:12,908 INFO [RS:0;jenkins-hbase4:46039] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 04:11:12,908 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,908 DEBUG [RS:2;jenkins-hbase4:43857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:12,908 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ZKUtil(398): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 04:11:12,908 DEBUG [RS:2;jenkins-hbase4:43857] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:12,908 INFO [RS:2;jenkins-hbase4:43857] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 04:11:12,908 INFO [RS:0;jenkins-hbase4:46039] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 04:11:12,908 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,909 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,909 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,909 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ZKUtil(398): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 04:11:12,909 INFO [RS:2;jenkins-hbase4:43857] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 04:11:12,909 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:12,909 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
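The ZKUtil lines above note that the /hbase/rpc-throttle znode does not exist yet while each quota manager starts with "rpc throttle enabled is true"; that znode is only written once an operator flips the throttle. A minimal sketch, assuming the Admin#switchRpcThrottle / Admin#isRpcThrottleEnabled methods available in HBase 2.2+, of toggling it from a client:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RpcThrottleSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Switching the throttle writes the /hbase/rpc-throttle znode that the
      // region servers above could not find yet; the call returns the previous state.
      boolean wasEnabled = admin.switchRpcThrottle(false);
      System.out.println("was enabled: " + wasEnabled
          + ", now enabled: " + admin.isRpcThrottleEnabled());
    }
  }
}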
2023-07-24 04:11:12,935 DEBUG [jenkins-hbase4:40563] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 04:11:12,936 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:12,936 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:12,936 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:12,936 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:12,936 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:12,938 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46039,1690171872278, state=OPENING 2023-07-24 04:11:12,940 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:12,940 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:12,940 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46039,1690171872278}] 2023-07-24 04:11:13,000 INFO [RS:1;jenkins-hbase4:43611] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43611%2C1690171872392, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:13,010 INFO [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46039%2C1690171872278, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:13,011 INFO [RS:2;jenkins-hbase4:43857] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43857%2C1690171872444, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:13,021 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:13,021 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:13,021 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:13,027 INFO [RS:1;jenkins-hbase4:43611] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392/jenkins-hbase4.apache.org%2C43611%2C1690171872392.1690171873001 2023-07-24 04:11:13,028 DEBUG [RS:1;jenkins-hbase4:43611] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:13,037 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:13,038 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:13,038 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:13,041 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:13,041 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:13,041 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:13,043 WARN [ReadOnlyZKClient-127.0.0.1:59235@0x75853b23] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 04:11:13,043 INFO [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278/jenkins-hbase4.apache.org%2C46039%2C1690171872278.1690171873012 2023-07-24 04:11:13,043 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:13,043 DEBUG [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(887): 
Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:13,049 INFO [RS:2;jenkins-hbase4:43857] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444/jenkins-hbase4.apache.org%2C43857%2C1690171872444.1690171873013 2023-07-24 04:11:13,049 DEBUG [RS:2;jenkins-hbase4:43857] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:13,049 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:13,049 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46039] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:33020 deadline: 1690171933049, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:13,093 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:13,094 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:11:13,096 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33034, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:11:13,100 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 04:11:13,100 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:13,102 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46039%2C1690171872278.meta, suffix=.meta, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:13,116 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:13,116 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:13,116 DEBUG [RS-EventLoopGroup-12-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:13,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278/jenkins-hbase4.apache.org%2C46039%2C1690171872278.meta.1690171873102.meta 2023-07-24 04:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK]] 2023-07-24 04:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 04:11:13,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
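The wal.AbstractFSWAL lines above report the WAL configuration in effect: blocksize=256 MB, rollsize=128 MB (blocksize x 0.5), maxLogs=32. A minimal sketch, assuming the standard hbase.regionserver.* WAL keys (block size, roll multiplier, maxlogs), of how those values would be set:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed keys: WAL block size, roll multiplier (rollsize = blocksize * multiplier),
    // and the number of WAL files kept before forcing flushes; values match the log above.
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // 256 MB
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = 128 MB
    conf.setInt("hbase.regionserver.maxlogs", 32);
    System.out.println(conf.get("hbase.regionserver.maxlogs"));
  }
}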
2023-07-24 04:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 04:11:13,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:13,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 04:11:13,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 04:11:13,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 04:11:13,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:11:13,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:11:13,128 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 04:11:13,138 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:13,138 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:13,144 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:13,144 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:13,144 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:13,144 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 04:11:13,145 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:11:13,145 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:11:13,145 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 04:11:13,153 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 354c8876ee08418994b55326872ce722 2023-07-24 04:11:13,153 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/354c8876ee08418994b55326872ce722 2023-07-24 04:11:13,159 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:13,159 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:13,159 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:13,159 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 04:11:13,160 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:11:13,160 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:11:13,161 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; 
major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 04:11:13,168 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:13,168 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:13,174 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:13,174 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:13,174 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:13,175 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:11:13,176 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:11:13,178 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
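The FlushLargeStoresPolicy message above falls back to the memstore flush size divided by the number of column families because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set on hbase:meta: with the default 128 MB flush size and the three families opened above (info, rep_barrier, table), 134217728 / 3 = 44739242 bytes, i.e. the "42.7 M" in the message. A minimal sketch of that arithmetic and of setting the bound explicitly on a hypothetical table descriptor:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FlushLowerBoundSketch {
  public static void main(String[] args) {
    long memstoreFlushSize = 128L * 1024 * 1024; // default hbase.hregion.memstore.flush.size
    int families = 3;                            // info, rep_barrier, table
    System.out.println(memstoreFlushSize / families); // 44739242 bytes, ~42.7 MB

    // Hypothetical table "t1": pinning the per-family flush lower bound in the descriptor
    // instead of relying on the flushSize / #families fallback reported in the log.
    TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(16L * 1024 * 1024))
        .build();
    System.out.println(td.getValue("hbase.hregion.percolumnfamilyflush.size.lower.bound"));
  }
}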
2023-07-24 04:11:13,179 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 04:11:13,180 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=144; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10382918400, jitterRate=-0.03301537036895752}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 04:11:13,180 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 04:11:13,181 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=114, masterSystemTime=1690171873093 2023-07-24 04:11:13,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 04:11:13,186 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 04:11:13,186 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46039,1690171872278, state=OPEN 2023-07-24 04:11:13,188 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:13,189 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:13,190 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-24 04:11:13,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46039,1690171872278 in 249 msec 2023-07-24 04:11:13,192 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=110 2023-07-24 04:11:13,192 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 415 msec 2023-07-24 04:11:13,379 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:13,380 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41157 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:13,382 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41157 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 2023-07-24 04:11:13,487 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41157 this server is in the failed servers list 2023-07-24 04:11:13,692 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41157 this server is in the failed servers list 2023-07-24 04:11:13,997 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41157 this server is in the failed servers list 2023-07-24 04:11:14,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1604ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1503ms 2023-07-24 04:11:14,502 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41157 this server is in the failed servers list 2023-07-24 04:11:15,510 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41157 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:15,512 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41157 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 2023-07-24 04:11:15,863 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3106ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3005ms 2023-07-24 04:11:17,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4509ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-24 04:11:17,266 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
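The ServerManager wait loop above ("Waiting on regionserver count=3 ... expecting min=1 ... timeout=4500ms", polled roughly every 1.5 s before "Finished waiting") is governed by master-side settings. A minimal sketch, assuming the hbase.master.wait.on.regionservers.* keys read by ServerManager, of tuning those thresholds:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MasterWaitOnRegionServersSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed keys; the logged values (min=1, timeout=4500ms, ~1500ms between
    // "Waiting on regionserver count" messages) match their usual defaults.
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
    conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);
    conf.setInt("hbase.master.wait.on.regionservers.interval", 1500);
    System.out.println(conf.get("hbase.master.wait.on.regionservers.timeout"));
  }
}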
2023-07-24 04:11:17,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=73e1052e9bc949a33667944e6caa42b4, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,41157,1690171852333, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333, openSeqNum=2 2023-07-24 04:11:17,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=6aa1ab126d58dcf7d835257119c9304f, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,41157,1690171852333, regionLocation=jenkins-hbase4.apache.org,41157,1690171852333, openSeqNum=2 2023-07-24 04:11:17,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 04:11:17,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690171937273 2023-07-24 04:11:17,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690171997273 2023-07-24 04:11:17,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-07-24 04:11:17,292 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,43785,1690171856375 had 0 regions 2023-07-24 04:11:17,293 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,37679,1690171852273 had 0 regions 2023-07-24 04:11:17,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40563,1690171872205-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:17,292 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,39717,1690171855814 had 1 regions 2023-07-24 04:11:17,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40563,1690171872205-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:17,293 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,41157,1690171852333 had 2 regions 2023-07-24 04:11:17,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40563,1690171872205-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:17,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40563, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:17,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:17,294 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. is NOT online; state={73e1052e9bc949a33667944e6caa42b4 state=OPEN, ts=1690171877273, server=jenkins-hbase4.apache.org,41157,1690171852333}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 
2023-07-24 04:11:17,295 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=110, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,39717,1690171855814, splitWal=true, meta=true, isMeta: false 2023-07-24 04:11:17,295 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=109, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41157,1690171852333, splitWal=true, meta=false, isMeta: false 2023-07-24 04:11:17,294 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=111, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43785,1690171856375, splitWal=true, meta=false, isMeta: false 2023-07-24 04:11:17,295 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=112, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37679,1690171852273, splitWal=true, meta=false, isMeta: false 2023-07-24 04:11:17,301 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,41157,1690171852333/hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4., unknown_server=jenkins-hbase4.apache.org,41157,1690171852333/hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:17,302 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,302 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333-splitting 2023-07-24 04:11:17,302 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,39717,1690171855814 WAL count=0, meta=false 2023-07-24 04:11:17,303 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375-splitting 2023-07-24 04:11:17,303 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,303 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,41157,1690171852333 WAL count=0, meta=false 2023-07-24 04:11:17,304 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,304 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43785,1690171856375 WAL count=0, meta=false 2023-07-24 04:11:17,306 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273-splitting 2023-07-24 04:11:17,308 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,39717,1690171855814-splitting dir is empty, no logs to split. 
2023-07-24 04:11:17,308 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,39717,1690171855814 WAL count=0, meta=false 2023-07-24 04:11:17,308 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,39717,1690171855814 WAL splitting is done? wals=0, meta=false 2023-07-24 04:11:17,309 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,309 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,37679,1690171852273 WAL count=0, meta=false 2023-07-24 04:11:17,311 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,311 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,41157,1690171852333 WAL count=0, meta=false 2023-07-24 04:11:17,311 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,41157,1690171852333 WAL splitting is done? wals=0, meta=false 2023-07-24 04:11:17,314 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,314 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,37679,1690171852273 WAL count=0, meta=false 2023-07-24 04:11:17,314 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,37679,1690171852273 WAL splitting is done? wals=0, meta=false 2023-07-24 04:11:17,316 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,41157,1690171852333 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41157,1690171852333-splitting does not exist. 2023-07-24 04:11:17,317 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,39717,1690171855814 after splitting done 2023-07-24 04:11:17,317 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,39717,1690171855814 from processing; numProcessing=3 2023-07-24 04:11:17,317 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN}, {pid=116, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN}] 2023-07-24 04:11:17,320 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375-splitting dir is empty, no logs to split. 2023-07-24 04:11:17,320 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43785,1690171856375 WAL count=0, meta=false 2023-07-24 04:11:17,320 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,43785,1690171856375 WAL splitting is done? 
wals=0, meta=false 2023-07-24 04:11:17,320 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=116, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN 2023-07-24 04:11:17,321 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN 2023-07-24 04:11:17,322 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,37679,1690171852273 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,37679,1690171852273-splitting does not exist. 2023-07-24 04:11:17,323 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,39717,1690171855814, splitWal=true, meta=true in 4.6250 sec 2023-07-24 04:11:17,324 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:17,324 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=116, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:17,324 DEBUG [jenkins-hbase4:40563] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 04:11:17,324 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:17,324 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:17,324 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:17,325 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:17,325 DEBUG [jenkins-hbase4:40563] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-24 04:11:17,331 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,37679,1690171852273 after splitting done 2023-07-24 04:11:17,331 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=116 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:17,331 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,37679,1690171852273 from processing; numProcessing=2 2023-07-24 04:11:17,331 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171877331"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171877331"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171877331"}]},"ts":"1690171877331"} 2023-07-24 04:11:17,332 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=6aa1ab126d58dcf7d835257119c9304f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:17,332 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171877332"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171877332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171877332"}]},"ts":"1690171877332"} 2023-07-24 04:11:17,334 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,43785,1690171856375 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43785,1690171856375-splitting does not exist. 2023-07-24 04:11:17,335 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=117, ppid=116, state=RUNNABLE; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,46039,1690171872278}] 2023-07-24 04:11:17,335 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,37679,1690171852273, splitWal=true, meta=false in 4.6340 sec 2023-07-24 04:11:17,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=115, state=RUNNABLE; OpenRegionProcedure 6aa1ab126d58dcf7d835257119c9304f, server=jenkins-hbase4.apache.org,43611,1690171872392}] 2023-07-24 04:11:17,336 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,43785,1690171856375 after splitting done 2023-07-24 04:11:17,336 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,43785,1690171856375 from processing; numProcessing=1 2023-07-24 04:11:17,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43785,1690171856375, splitWal=true, meta=false in 4.6410 sec 2023-07-24 04:11:17,489 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:17,489 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:11:17,491 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35922, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:11:17,492 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
2023-07-24 04:11:17,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73e1052e9bc949a33667944e6caa42b4, NAME => 'hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:17,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:17,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,494 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:17,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6aa1ab126d58dcf7d835257119c9304f, NAME => 'hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:17,495 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:11:17,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:11:17,495 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:11:17,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. service=MultiRowMutationService 2023-07-24 04:11:17,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 04:11:17,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:17,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,496 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73e1052e9bc949a33667944e6caa42b4 columnFamilyName info 2023-07-24 04:11:17,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,500 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,501 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m 2023-07-24 04:11:17,501 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m 2023-07-24 04:11:17,502 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6aa1ab126d58dcf7d835257119c9304f columnFamilyName m 2023-07-24 04:11:17,507 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6c036af6e624012bca44b5797bc2af2 
2023-07-24 04:11:17,507 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info/c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:17,507 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(310): Store=73e1052e9bc949a33667944e6caa42b4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:17,510 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,513 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef83161486a0423dbe81bde050027796 2023-07-24 04:11:17,513 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/ef83161486a0423dbe81bde050027796 2023-07-24 04:11:17,513 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(310): Store=6aa1ab126d58dcf7d835257119c9304f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:17,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:17,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,516 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73e1052e9bc949a33667944e6caa42b4; next sequenceid=15; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11853994720, jitterRate=0.10398928821086884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:17,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:17,517 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4., pid=117, masterSystemTime=1690171877486 2023-07-24 04:11:17,519 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:17,519 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:17,519 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:17,520 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=116 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPEN, openSeqNum=15, regionLocation=jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:17,520 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171877520"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171877520"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171877520"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171877520"}]},"ts":"1690171877520"} 2023-07-24 04:11:17,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6aa1ab126d58dcf7d835257119c9304f; next sequenceid=71; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@691b6260, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:17,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:11:17,521 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., pid=118, masterSystemTime=1690171877489 2023-07-24 04:11:17,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:17,525 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
2023-07-24 04:11:17,526 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=6aa1ab126d58dcf7d835257119c9304f, regionState=OPEN, openSeqNum=71, regionLocation=jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:17,526 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171877526"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171877526"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171877526"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171877526"}]},"ts":"1690171877526"} 2023-07-24 04:11:17,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=117, resume processing ppid=116 2023-07-24 04:11:17,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, ppid=116, state=SUCCESS; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,46039,1690171872278 in 189 msec 2023-07-24 04:11:17,529 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41157 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:17,531 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4168 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:41157 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., hostname=jenkins-hbase4.apache.org,41157,1690171852333, seqNum=2, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:41157 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:17,531 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41157 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41157 2023-07-24 04:11:17,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN in 212 msec 2023-07-24 04:11:17,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=115 2023-07-24 04:11:17,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=115, state=SUCCESS; OpenRegionProcedure 6aa1ab126d58dcf7d835257119c9304f, server=jenkins-hbase4.apache.org,43611,1690171872392 in 193 msec 2023-07-24 04:11:17,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=109 2023-07-24 04:11:17,539 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,41157,1690171852333 after splitting done 2023-07-24 04:11:17,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN in 217 msec 2023-07-24 04:11:17,539 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,41157,1690171852333 from processing; numProcessing=0 2023-07-24 04:11:17,541 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,41157,1690171852333, splitWal=true, meta=false in 4.8520 sec 2023-07-24 04:11:18,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-24 04:11:18,311 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 04:11:18,314 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 04:11:18,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.809sec 2023-07-24 04:11:18,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-24 04:11:18,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:18,317 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=119, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 04:11:18,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 04:11:18,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-24 04:11:18,321 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 04:11:18,322 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 04:11:18,323 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,324 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 empty. 2023-07-24 04:11:18,324 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,325 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 04:11:18,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 04:11:18,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-24 04:11:18,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:18,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:18,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-24 04:11:18,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 04:11:18,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40563,1690171872205-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 04:11:18,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40563,1690171872205-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 04:11:18,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 04:11:18,342 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 04:11:18,343 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6d1026a50e3a812feaa5fb2336097299, NAME => 'hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.tmp 2023-07-24 04:11:18,358 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:18,358 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 6d1026a50e3a812feaa5fb2336097299, disabling compactions & flushes 2023-07-24 04:11:18,358 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:18,358 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:18,358 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. after waiting 0 ms 2023-07-24 04:11:18,358 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:18,358 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 
2023-07-24 04:11:18,358 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 6d1026a50e3a812feaa5fb2336097299: 2023-07-24 04:11:18,361 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 04:11:18,362 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690171878362"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171878362"}]},"ts":"1690171878362"} 2023-07-24 04:11:18,363 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 04:11:18,364 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 04:11:18,364 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171878364"}]},"ts":"1690171878364"} 2023-07-24 04:11:18,366 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 04:11:18,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:18,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:18,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:18,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:18,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:18,372 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=120, ppid=119, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN}] 2023-07-24 04:11:18,374 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, ppid=119, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN 2023-07-24 04:11:18,375 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, ppid=119, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43857,1690171872444; forceNewPlan=false, retain=false 2023-07-24 04:11:18,399 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x0faf8786 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:18,406 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49fc4eb5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:18,408 DEBUG 
[hconnection-0x2131cba7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:18,410 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33044, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:18,418 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-24 04:11:18,419 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0faf8786 to 127.0.0.1:59235 2023-07-24 04:11:18,419 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:18,420 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:40563 after: jenkins-hbase4.apache.org:40563 2023-07-24 04:11:18,420 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x231c30fe to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:18,437 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c5fabc9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:18,437 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:18,525 INFO [jenkins-hbase4:40563] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 04:11:18,526 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=6d1026a50e3a812feaa5fb2336097299, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:18,526 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690171878526"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171878526"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171878526"}]},"ts":"1690171878526"} 2023-07-24 04:11:18,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; OpenRegionProcedure 6d1026a50e3a812feaa5fb2336097299, server=jenkins-hbase4.apache.org,43857,1690171872444}] 2023-07-24 04:11:18,632 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 04:11:18,681 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:18,681 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:11:18,683 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34842, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:11:18,686 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 
2023-07-24 04:11:18,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d1026a50e3a812feaa5fb2336097299, NAME => 'hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:18,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:18,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,688 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,690 DEBUG [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/q 2023-07-24 04:11:18,690 DEBUG [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/q 2023-07-24 04:11:18,690 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d1026a50e3a812feaa5fb2336097299 columnFamilyName q 2023-07-24 04:11:18,691 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(310): Store=6d1026a50e3a812feaa5fb2336097299/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:18,691 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,692 DEBUG 
[StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/u 2023-07-24 04:11:18,692 DEBUG [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/u 2023-07-24 04:11:18,693 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d1026a50e3a812feaa5fb2336097299 columnFamilyName u 2023-07-24 04:11:18,693 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(310): Store=6d1026a50e3a812feaa5fb2336097299/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:18,694 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,696 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-24 04:11:18,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:18,703 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 04:11:18,703 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d1026a50e3a812feaa5fb2336097299; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9633920960, jitterRate=-0.10277119278907776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 04:11:18,703 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d1026a50e3a812feaa5fb2336097299: 2023-07-24 04:11:18,704 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299., pid=121, masterSystemTime=1690171878681 2023-07-24 04:11:18,711 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:18,712 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:18,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=6d1026a50e3a812feaa5fb2336097299, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:18,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690171878714"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171878714"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171878714"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171878714"}]},"ts":"1690171878714"} 2023-07-24 04:11:18,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-24 04:11:18,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; OpenRegionProcedure 6d1026a50e3a812feaa5fb2336097299, server=jenkins-hbase4.apache.org,43857,1690171872444 in 189 msec 2023-07-24 04:11:18,723 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=120, resume processing ppid=119 2023-07-24 04:11:18,724 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, ppid=119, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN in 350 msec 2023-07-24 04:11:18,724 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 04:11:18,725 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690171878725"}]},"ts":"1690171878725"} 2023-07-24 04:11:18,726 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 04:11:18,729 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=119, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 04:11:18,741 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, state=SUCCESS; CreateTableProcedure table=hbase:quota in 414 msec 2023-07-24 04:11:18,836 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 04:11:18,837 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 04:11:18,838 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 04:11:18,839 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-24 04:11:20,451 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 04:11:20,451 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 04:11:20,452 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 04:11:20,452 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-24 04:11:20,452 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:20,452 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 04:11:20,452 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 04:11:20,452 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 04:11:21,547 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:21,548 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 
2023-07-24 04:11:21,549 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 04:11:21,549 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 04:11:21,557 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:21,557 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:21,558 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:21,559 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-24 04:11:21,559 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40563,1690171872205] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 04:11:21,642 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 04:11:21,644 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40146, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 04:11:21,647 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 04:11:21,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40563] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 04:11:21,648 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x4d3eda28 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:21,653 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78df115e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:21,653 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:21,655 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:21,656 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, 
type=None, state=SyncConnected, path=null 2023-07-24 04:11:21,656 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10195863d98001b connected 2023-07-24 04:11:21,657 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:21,659 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45230, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:21,667 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-24 04:11:21,667 INFO [Listener at localhost/41307] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x231c30fe to 127.0.0.1:59235 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] util.JVMClusterUtil(257): Found active master hash=1489932867, stopped=false 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 04:11:21,667 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 04:11:21,668 INFO [Listener at localhost/41307] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:21,670 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:21,670 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:21,670 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:21,670 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:21,670 INFO [Listener at localhost/41307] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 04:11:21,670 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:21,670 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that 
does not yet exist, /hbase/running 2023-07-24 04:11:21,671 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75853b23 to 127.0.0.1:59235 2023-07-24 04:11:21,671 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,671 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:21,671 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46039,1690171872278' ***** 2023-07-24 04:11:21,671 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:21,672 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:21,672 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:21,672 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:21,672 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43611,1690171872392' ***** 2023-07-24 04:11:21,672 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:21,675 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43857,1690171872444' ***** 2023-07-24 04:11:21,675 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:21,675 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:21,675 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:21,675 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:21,675 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:21,684 INFO [RS:0;jenkins-hbase4:46039] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@53d03153{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:21,686 INFO [RS:2;jenkins-hbase4:43857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2d4eff7e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:21,686 INFO [RS:0;jenkins-hbase4:46039] server.AbstractConnector(383): Stopped ServerConnector@827b832{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:21,686 INFO [RS:1;jenkins-hbase4:43611] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7a33fea8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:21,686 INFO [RS:0;jenkins-hbase4:46039] session.HouseKeeper(149): node0 Stopped scavenging 
2023-07-24 04:11:21,686 INFO [RS:2;jenkins-hbase4:43857] server.AbstractConnector(383): Stopped ServerConnector@73212ba8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:21,686 INFO [RS:0;jenkins-hbase4:46039] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4622d84c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:21,686 INFO [RS:1;jenkins-hbase4:43611] server.AbstractConnector(383): Stopped ServerConnector@7125e0b4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:21,687 INFO [RS:0;jenkins-hbase4:46039] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5421514{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:21,686 INFO [RS:2;jenkins-hbase4:43857] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:21,687 INFO [RS:1;jenkins-hbase4:43611] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:21,687 INFO [RS:2;jenkins-hbase4:43857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@49da996{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:21,687 INFO [RS:1;jenkins-hbase4:43611] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42785be3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:21,687 INFO [RS:2;jenkins-hbase4:43857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ffd85df{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:21,687 INFO [RS:1;jenkins-hbase4:43611] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@176acb35{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:21,687 INFO [RS:0;jenkins-hbase4:46039] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:21,688 INFO [RS:0;jenkins-hbase4:46039] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:21,688 INFO [RS:1;jenkins-hbase4:43611] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:21,688 INFO [RS:0;jenkins-hbase4:46039] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:21,688 INFO [RS:1;jenkins-hbase4:43611] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:21,688 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(3305): Received CLOSE for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:21,688 INFO [RS:1;jenkins-hbase4:43611] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 04:11:21,688 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:21,688 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(3305): Received CLOSE for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:21,688 INFO [RS:2;jenkins-hbase4:43857] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:21,688 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:21,688 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:21,689 DEBUG [RS:0;jenkins-hbase4:46039] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0dbea709 to 127.0.0.1:59235 2023-07-24 04:11:21,689 INFO [RS:2;jenkins-hbase4:43857] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:21,689 INFO [RS:2;jenkins-hbase4:43857] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:21,689 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:21,689 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(3305): Received CLOSE for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:21,689 DEBUG [RS:0;jenkins-hbase4:46039] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6aa1ab126d58dcf7d835257119c9304f, disabling compactions & flushes 2023-07-24 04:11:21,690 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:21,690 INFO [RS:0;jenkins-hbase4:46039] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:21,690 DEBUG [RS:2;jenkins-hbase4:43857] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39c18581 to 127.0.0.1:59235 2023-07-24 04:11:21,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73e1052e9bc949a33667944e6caa42b4, disabling compactions & flushes 2023-07-24 04:11:21,690 DEBUG [RS:2;jenkins-hbase4:43857] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,689 DEBUG [RS:1;jenkins-hbase4:43611] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b7c88f8 to 127.0.0.1:59235 2023-07-24 04:11:21,690 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 04:11:21,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. after waiting 0 ms 2023-07-24 04:11:21,690 INFO [RS:0;jenkins-hbase4:46039] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-24 04:11:21,691 INFO [RS:0;jenkins-hbase4:46039] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:21,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d1026a50e3a812feaa5fb2336097299, disabling compactions & flushes 2023-07-24 04:11:21,691 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 04:11:21,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:21,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:21,691 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. after waiting 0 ms 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:21,691 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1478): Online Regions={6d1026a50e3a812feaa5fb2336097299=hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.} 2023-07-24 04:11:21,691 DEBUG [RS:1;jenkins-hbase4:43611] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,692 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 04:11:21,692 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1478): Online Regions={6aa1ab126d58dcf7d835257119c9304f=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.} 2023-07-24 04:11:21,692 DEBUG [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1504): Waiting on 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:21,692 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 04:11:21,692 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:21,691 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 73e1052e9bc949a33667944e6caa42b4=hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.} 2023-07-24 04:11:21,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
2023-07-24 04:11:21,693 DEBUG [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1504): Waiting on 1588230740, 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:21,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. after waiting 0 ms 2023-07-24 04:11:21,692 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 04:11:21,692 DEBUG [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1504): Waiting on 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:21,693 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 04:11:21,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:21,693 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 04:11:21,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6aa1ab126d58dcf7d835257119c9304f 1/1 column families, dataSize=242 B heapSize=648 B 2023-07-24 04:11:21,693 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.05 KB heapSize=5.87 KB 2023-07-24 04:11:21,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 04:11:21,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=14 2023-07-24 04:11:21,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:21,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d1026a50e3a812feaa5fb2336097299: 2023-07-24 04:11:21,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:21,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:21,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:21,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
2023-07-24 04:11:21,713 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=155 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/de3310ea376f4898b8ea51fb19fe2f72 2023-07-24 04:11:21,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=242 B at sequenceid=74 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/.tmp/m/38da92e746f04a2aa60a5af2d1328ab4 2023-07-24 04:11:21,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/.tmp/m/38da92e746f04a2aa60a5af2d1328ab4 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/38da92e746f04a2aa60a5af2d1328ab4 2023-07-24 04:11:21,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/38da92e746f04a2aa60a5af2d1328ab4, entries=2, sequenceid=74, filesize=5.0 K 2023-07-24 04:11:21,735 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=155 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/d0bd780de0244b1c96eaeb749f1d60a0 2023-07-24 04:11:21,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~242 B/242, heapSize ~632 B/632, currentSize=0 B/0 for 6aa1ab126d58dcf7d835257119c9304f in 43ms, sequenceid=74, compaction requested=false 2023-07-24 04:11:21,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/recovered.edits/77.seqid, newMaxSeqId=77, maxSeqId=70 2023-07-24 04:11:21,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:21,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
2023-07-24 04:11:21,745 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/de3310ea376f4898b8ea51fb19fe2f72 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72 2023-07-24 04:11:21,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:11:21,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:21,750 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72, entries=26, sequenceid=155, filesize=7.7 K 2023-07-24 04:11:21,751 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/d0bd780de0244b1c96eaeb749f1d60a0 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0 2023-07-24 04:11:21,756 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0, entries=2, sequenceid=155, filesize=4.7 K 2023-07-24 04:11:21,757 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.05 KB/3126, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 64ms, sequenceid=155, compaction requested=true 2023-07-24 04:11:21,767 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:21,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/recovered.edits/158.seqid, newMaxSeqId=158, maxSeqId=143 2023-07-24 04:11:21,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:21,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:21,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 04:11:21,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:21,775 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:21,859 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 04:11:21,859 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 04:11:21,892 INFO [RS:1;jenkins-hbase4:43611] 
regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43611,1690171872392; all regions closed. 2023-07-24 04:11:21,892 DEBUG [RS:1;jenkins-hbase4:43611] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 04:11:21,893 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46039,1690171872278; all regions closed. 2023-07-24 04:11:21,893 DEBUG [RS:0;jenkins-hbase4:46039] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 04:11:21,893 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43857,1690171872444; all regions closed. 2023-07-24 04:11:21,893 DEBUG [RS:2;jenkins-hbase4:43857] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 04:11:21,905 DEBUG [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:21,905 INFO [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46039%2C1690171872278.meta:.meta(num 1690171873102) 2023-07-24 04:11:21,905 DEBUG [RS:1;jenkins-hbase4:43611] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:21,905 INFO [RS:1;jenkins-hbase4:43611] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43611%2C1690171872392:(num 1690171873001) 2023-07-24 04:11:21,905 DEBUG [RS:1;jenkins-hbase4:43611] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,905 INFO [RS:1;jenkins-hbase4:43611] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:21,907 DEBUG [RS:2;jenkins-hbase4:43857] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:21,907 INFO [RS:2;jenkins-hbase4:43857] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43857%2C1690171872444:(num 1690171873013) 2023-07-24 04:11:21,907 DEBUG [RS:2;jenkins-hbase4:43857] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,907 INFO [RS:2;jenkins-hbase4:43857] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:21,908 INFO [RS:1;jenkins-hbase4:43611] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:21,908 INFO [RS:2;jenkins-hbase4:43857] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:21,908 INFO [RS:1;jenkins-hbase4:43611] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:21,908 INFO [RS:2;jenkins-hbase4:43857] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:21,908 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:21,908 INFO [RS:1;jenkins-hbase4:43611] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:21,908 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 04:11:21,908 INFO [RS:1;jenkins-hbase4:43611] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:21,908 INFO [RS:2;jenkins-hbase4:43857] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:21,908 INFO [RS:2;jenkins-hbase4:43857] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:21,909 INFO [RS:1;jenkins-hbase4:43611] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43611 2023-07-24 04:11:21,909 INFO [RS:2;jenkins-hbase4:43857] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43857 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43857,1690171872444 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:21,912 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:21,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:21,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:21,914 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43611,1690171872392 2023-07-24 04:11:21,914 
INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43857,1690171872444] 2023-07-24 04:11:21,914 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43857,1690171872444; numProcessing=1 2023-07-24 04:11:21,917 DEBUG [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:21,917 INFO [RS:0;jenkins-hbase4:46039] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46039%2C1690171872278:(num 1690171873012) 2023-07-24 04:11:21,917 DEBUG [RS:0;jenkins-hbase4:46039] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,917 INFO [RS:0;jenkins-hbase4:46039] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:21,917 INFO [RS:0;jenkins-hbase4:46039] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:21,918 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:21,918 INFO [RS:0;jenkins-hbase4:46039] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46039 2023-07-24 04:11:21,921 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43857,1690171872444 already deleted, retry=false 2023-07-24 04:11:21,921 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43857,1690171872444 expired; onlineServers=2 2023-07-24 04:11:21,921 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43611,1690171872392] 2023-07-24 04:11:21,921 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43611,1690171872392; numProcessing=2 2023-07-24 04:11:21,922 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46039,1690171872278 2023-07-24 04:11:21,922 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:21,923 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43611,1690171872392 already deleted, retry=false 2023-07-24 04:11:21,923 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43611,1690171872392 expired; onlineServers=1 2023-07-24 04:11:21,924 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46039,1690171872278] 2023-07-24 04:11:21,924 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46039,1690171872278; numProcessing=3 2023-07-24 04:11:21,926 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase4.apache.org,46039,1690171872278 already deleted, retry=false 2023-07-24 04:11:21,926 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46039,1690171872278 expired; onlineServers=0 2023-07-24 04:11:21,926 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40563,1690171872205' ***** 2023-07-24 04:11:21,926 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 04:11:21,927 DEBUG [M:0;jenkins-hbase4:40563] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5cf10d82, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:21,927 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:21,928 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:21,928 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:21,928 INFO [M:0;jenkins-hbase4:40563] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c0e136e{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 04:11:21,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:21,929 INFO [M:0;jenkins-hbase4:40563] server.AbstractConnector(383): Stopped ServerConnector@1f7713a7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:21,929 INFO [M:0;jenkins-hbase4:40563] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:21,929 INFO [M:0;jenkins-hbase4:40563] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3651d084{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:21,929 INFO [M:0;jenkins-hbase4:40563] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fea07d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:21,929 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40563,1690171872205 2023-07-24 04:11:21,930 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40563,1690171872205; all regions closed. 
2023-07-24 04:11:21,930 DEBUG [M:0;jenkins-hbase4:40563] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:21,930 INFO [M:0;jenkins-hbase4:40563] master.HMaster(1491): Stopping master jetty server 2023-07-24 04:11:21,930 INFO [M:0;jenkins-hbase4:40563] server.AbstractConnector(383): Stopped ServerConnector@59439992{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:21,931 DEBUG [M:0;jenkins-hbase4:40563] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 04:11:21,931 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 04:11:21,931 DEBUG [M:0;jenkins-hbase4:40563] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 04:11:21,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171872757] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171872757,5,FailOnTimeoutGroup] 2023-07-24 04:11:21,931 INFO [M:0;jenkins-hbase4:40563] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 04:11:21,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171872757] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171872757,5,FailOnTimeoutGroup] 2023-07-24 04:11:21,931 INFO [M:0;jenkins-hbase4:40563] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 04:11:21,932 INFO [M:0;jenkins-hbase4:40563] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:21,932 DEBUG [M:0;jenkins-hbase4:40563] master.HMaster(1512): Stopping service threads 2023-07-24 04:11:21,932 INFO [M:0;jenkins-hbase4:40563] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 04:11:21,933 ERROR [M:0;jenkins-hbase4:40563] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 04:11:21,933 INFO [M:0;jenkins-hbase4:40563] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 04:11:21,933 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 04:11:21,933 DEBUG [M:0;jenkins-hbase4:40563] zookeeper.ZKUtil(398): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 04:11:21,933 WARN [M:0;jenkins-hbase4:40563] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 04:11:21,933 INFO [M:0;jenkins-hbase4:40563] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 04:11:21,934 INFO [M:0;jenkins-hbase4:40563] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 04:11:21,934 DEBUG [M:0;jenkins-hbase4:40563] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 04:11:21,934 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:11:21,934 DEBUG [M:0;jenkins-hbase4:40563] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:11:21,934 DEBUG [M:0;jenkins-hbase4:40563] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 04:11:21,934 DEBUG [M:0;jenkins-hbase4:40563] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 04:11:21,935 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=44.95 KB heapSize=54.44 KB 2023-07-24 04:11:21,958 INFO [M:0;jenkins-hbase4:40563] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=44.95 KB at sequenceid=909 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/09960afae71947bf89914bb227a6fd52 2023-07-24 04:11:21,965 DEBUG [M:0;jenkins-hbase4:40563] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/09960afae71947bf89914bb227a6fd52 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/09960afae71947bf89914bb227a6fd52 2023-07-24 04:11:21,972 INFO [M:0;jenkins-hbase4:40563] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/09960afae71947bf89914bb227a6fd52, entries=13, sequenceid=909, filesize=7.2 K 2023-07-24 04:11:21,973 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegion(2948): Finished flush of dataSize ~44.95 KB/46033, heapSize ~54.42 KB/55728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 39ms, sequenceid=909, compaction requested=false 2023-07-24 04:11:21,977 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 04:11:21,978 DEBUG [M:0;jenkins-hbase4:40563] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 04:11:21,981 INFO [M:0;jenkins-hbase4:40563] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 04:11:21,981 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:21,982 INFO [M:0;jenkins-hbase4:40563] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40563 2023-07-24 04:11:21,983 DEBUG [M:0;jenkins-hbase4:40563] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40563,1690171872205 already deleted, retry=false 2023-07-24 04:11:22,270 INFO [M:0;jenkins-hbase4:40563] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40563,1690171872205; zookeeper connection closed. 2023-07-24 04:11:22,270 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,271 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:40563-0x10195863d980010, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,371 INFO [RS:0;jenkins-hbase4:46039] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46039,1690171872278; zookeeper connection closed. 2023-07-24 04:11:22,371 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,371 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46039-0x10195863d980011, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,371 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@421a72de] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@421a72de 2023-07-24 04:11:22,471 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,471 INFO [RS:1;jenkins-hbase4:43611] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43611,1690171872392; zookeeper connection closed. 2023-07-24 04:11:22,471 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43611-0x10195863d980012, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,471 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@859b59f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@859b59f 2023-07-24 04:11:22,571 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,571 INFO [RS:2;jenkins-hbase4:43857] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43857,1690171872444; zookeeper connection closed. 
2023-07-24 04:11:22,571 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:43857-0x10195863d980013, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:22,590 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e7f4584] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e7f4584 2023-07-24 04:11:22,591 INFO [Listener at localhost/41307] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 04:11:22,591 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-24 04:11:24,190 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:24,593 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:24,594 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37329 2023-07-24 04:11:24,594 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,596 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,597 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37329 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:24,601 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:373290x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:24,602 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37329-0x10195863d98001c connected 2023-07-24 
04:11:24,604 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:24,605 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:24,605 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:24,606 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37329 2023-07-24 04:11:24,610 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37329 2023-07-24 04:11:24,610 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37329 2023-07-24 04:11:24,611 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37329 2023-07-24 04:11:24,611 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37329 2023-07-24 04:11:24,613 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:24,613 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:24,613 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:24,614 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 04:11:24,614 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:24,614 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:24,614 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 04:11:24,615 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 45469 2023-07-24 04:11:24,615 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:24,620 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,620 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@18ea3f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:24,621 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,621 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ec8d0d4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:24,627 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:24,628 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:24,628 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:24,628 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 04:11:24,629 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,631 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@262b79d3{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 04:11:24,632 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@18454b02{HTTP/1.1, (http/1.1)}{0.0.0.0:45469} 2023-07-24 04:11:24,632 INFO [Listener at localhost/41307] server.Server(415): Started @40932ms 2023-07-24 04:11:24,633 INFO [Listener at localhost/41307] master.HMaster(444): hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca, hbase.cluster.distributed=false 2023-07-24 04:11:24,645 DEBUG [pool-521-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-24 04:11:24,651 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:24,651 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,652 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,652 INFO [Listener at localhost/41307] 
ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:24,652 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,652 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:24,652 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:24,653 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40545 2023-07-24 04:11:24,653 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:24,656 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:24,656 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,658 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,659 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40545 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:24,666 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:405450x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:24,667 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:405450x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:24,668 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40545-0x10195863d98001d connected 2023-07-24 04:11:24,668 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:24,668 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:24,669 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40545 2023-07-24 04:11:24,669 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40545 2023-07-24 04:11:24,669 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40545 2023-07-24 04:11:24,673 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, 
port=40545 2023-07-24 04:11:24,674 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40545 2023-07-24 04:11:24,675 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:24,676 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:24,676 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:24,676 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:24,676 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:24,676 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:24,676 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:11:24,677 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 36387 2023-07-24 04:11:24,677 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:24,681 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,681 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@678c6f48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:24,682 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,682 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6dad5212{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:24,688 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:24,689 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:24,689 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:24,689 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 04:11:24,691 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,692 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1494140{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:24,694 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@3deddf17{HTTP/1.1, (http/1.1)}{0.0.0.0:36387} 2023-07-24 04:11:24,695 INFO [Listener at localhost/41307] server.Server(415): Started @40994ms 2023-07-24 04:11:24,707 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:24,707 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,707 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,707 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:24,707 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,707 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:24,708 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:24,708 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46393 2023-07-24 04:11:24,709 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:24,710 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:24,711 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,712 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,713 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46393 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:24,718 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:463930x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:24,719 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:463930x0, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:24,719 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46393-0x10195863d98001e connected 2023-07-24 04:11:24,720 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:24,720 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:24,720 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46393 2023-07-24 04:11:24,721 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46393 2023-07-24 04:11:24,721 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46393 2023-07-24 04:11:24,722 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46393 2023-07-24 04:11:24,722 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46393 2023-07-24 04:11:24,724 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:24,724 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:24,724 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:24,725 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:24,725 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:24,725 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:24,725 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
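The ipc.RpcExecutor records above ("Instantiated ... queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=N" followed by "Started handlerCount=N with threadPrefix=...") describe each RPC pool as one or more bounded call queues drained by a fixed set of handler threads. Below is a minimal, self-contained sketch of that queue-plus-handlers pattern; it is illustrative only, not HBase's actual RpcExecutor implementation, and the class and method names are invented.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch: a bounded call queue drained by a fixed pool of handler threads,
    // mirroring the queueClass/maxQueueLength/handlerCount values reported above.
    public class CallQueueSketch {
        private final BlockingQueue<Runnable> callQueue;

        public CallQueueSketch(int maxQueueLength, int handlerCount, String threadPrefix) {
            this.callQueue = new LinkedBlockingQueue<>(maxQueueLength);
            for (int i = 0; i < handlerCount; i++) {
                Thread handler = new Thread(() -> {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            callQueue.take().run();   // block until a call is queued, then execute it
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }, threadPrefix + ".handler-" + i);
                handler.setDaemon(true);
                handler.start();
            }
        }

        /** Returns false (the caller should reject the RPC) when the bounded queue is full. */
        public boolean dispatch(Runnable call) {
            return callQueue.offer(call);
        }
    }

A pool shaped like the default.FPBQ.Fifo entries above would correspond to something like new CallQueueSketch(30, 3, "default.FPBQ.Fifo").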
2023-07-24 04:11:24,725 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 34323 2023-07-24 04:11:24,725 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:24,727 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,727 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@11af7c60{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:24,727 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,727 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a617046{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:24,733 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:24,733 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:24,734 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:24,734 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 04:11:24,735 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,736 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3f905ed2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:24,737 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@6d8fca7f{HTTP/1.1, (http/1.1)}{0.0.0.0:34323} 2023-07-24 04:11:24,738 INFO [Listener at localhost/41307] server.Server(415): Started @41037ms 2023-07-24 04:11:24,749 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:24,749 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,749 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,749 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:24,749 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 04:11:24,750 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:24,750 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:24,750 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44573 2023-07-24 04:11:24,751 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:24,753 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:24,753 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,754 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,755 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44573 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:24,761 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:445730x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:24,762 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44573-0x10195863d98001f connected 2023-07-24 04:11:24,763 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 04:11:24,763 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:24,764 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:24,764 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44573 2023-07-24 04:11:24,764 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44573 2023-07-24 04:11:24,765 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44573 2023-07-24 04:11:24,765 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44573 2023-07-24 04:11:24,765 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44573 2023-07-24 04:11:24,767 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:24,767 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:24,767 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:24,768 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:24,768 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:24,768 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:24,768 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:11:24,768 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 34781 2023-07-24 04:11:24,768 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:24,771 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,771 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@48ab91ed{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:24,772 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,772 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67a2054{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:24,777 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:24,777 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:24,778 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:24,778 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 04:11:24,779 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:24,779 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5044b346{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:24,780 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@4325af24{HTTP/1.1, (http/1.1)}{0.0.0.0:34781} 2023-07-24 04:11:24,780 INFO [Listener at localhost/41307] server.Server(415): Started @41080ms 2023-07-24 04:11:24,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:24,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@26c7bd70{HTTP/1.1, (http/1.1)}{0.0.0.0:41383} 2023-07-24 04:11:24,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41087ms 2023-07-24 04:11:24,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,789 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 04:11:24,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,791 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:24,791 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:24,791 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:24,791 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:24,791 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 04:11:24,794 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:11:24,795 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:11:24,795 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37329,1690171884592 from backup master directory 2023-07-24 04:11:24,797 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,797 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 04:11:24,797 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:11:24,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:24,837 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7ced5fa0 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:24,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51b21a8b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:24,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 04:11:24,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 04:11:24,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:24,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205-dead as it is dead 2023-07-24 04:11:24,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205-dead/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 2023-07-24 04:11:24,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205-dead/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 after 1ms 2023-07-24 04:11:24,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205-dead/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 2023-07-24 04:11:24,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,40563,1690171872205-dead 2023-07-24 04:11:24,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37329%2C1690171884592, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,37329,1690171884592, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/oldWALs, maxLogs=10 2023-07-24 04:11:24,864 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:24,864 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:24,864 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:24,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/WALs/jenkins-hbase4.apache.org,37329,1690171884592/jenkins-hbase4.apache.org%2C37329%2C1690171884592.1690171884851 2023-07-24 04:11:24,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:24,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:24,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:24,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:24,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:24,870 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:24,871 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 04:11:24,871 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 04:11:24,879 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/09960afae71947bf89914bb227a6fd52 2023-07-24 04:11:24,883 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ec2cd2579e74f859cf29716c9c6d781 2023-07-24 04:11:24,883 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:24,884 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): 
Found 1 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-24 04:11:24,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 2023-07-24 04:11:24,889 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 127, firstSequenceIdInLog=800, maxSequenceIdInLog=911, path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 2023-07-24 04:11:24,890 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C40563%2C1690171872205.1690171872592 2023-07-24 04:11:24,893 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 04:11:24,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/911.seqid, newMaxSeqId=911, maxSeqId=798 2023-07-24 04:11:24,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=912; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10314339520, jitterRate=-0.039402276277542114}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:24,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 04:11:24,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 04:11:24,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 04:11:24,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 04:11:24,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
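The util.RecoverLeaseFSUtils records above ("Recover lease on dfs file ...", "Recovered lease, attempt=0 ... after 1ms") correspond to the new master asking the NameNode to revoke the previous writer's lease on the dead master's WAL before renaming it into recovered.wals and replaying it. A stripped-down sketch of that step using the public HDFS client API follows; the real utility adds timeouts, back-off and logging, and the 1-second polling interval here is an arbitrary illustration value.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    // Sketch: ask the NameNode to recover the lease on a WAL left behind by a dead
    // process, and wait until the file is closed so it can be safely read/renamed.
    public final class RecoverLeaseSketch {
        public static void recoverLease(Configuration conf, Path walFile) throws Exception {
            FileSystem fs = walFile.getFileSystem(conf);
            if (!(fs instanceof DistributedFileSystem)) {
                return; // local filesystems have no leases to recover
            }
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            boolean closed = dfs.recoverLease(walFile);
            while (!closed) {
                Thread.sleep(1000);   // give the NameNode time to finish recovery, then re-check
                closed = dfs.recoverLease(walFile) || dfs.isFileClosed(walFile);
            }
        }
    }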
2023-07-24 04:11:24,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 04:11:24,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-24 04:11:24,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-24 04:11:24,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-24 04:11:24,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-24 04:11:24,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-24 04:11:24,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36109,1690171852137, splitWal=true, meta=false 2023-07-24 04:11:24,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-24 04:11:24,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=14, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:11:24,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:11:24,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 04:11:24,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=21, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:24,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=42, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:24,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=63, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 04:11:24,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=64, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 04:11:24,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:24,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=68, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:24,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:24,915 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 04:11:24,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=75, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 04:11:24,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=76, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:24,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:24,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 04:11:24,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=83, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:24,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 04:11:24,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=87, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 04:11:24,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690171864804 type: FLUSH version: 2 ttl: 0 ) 2023-07-24 04:11:24,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:24,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 04:11:24,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=95, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:24,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 04:11:24,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=99, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:24,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:24,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; 
CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:24,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:24,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 04:11:24,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=108, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 04:11:24,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=109, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,41157,1690171852333, splitWal=true, meta=false 2023-07-24 04:11:24,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,39717,1690171855814, splitWal=true, meta=true 2023-07-24 04:11:24,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=111, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43785,1690171856375, splitWal=true, meta=false 2023-07-24 04:11:24,921 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=112, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,37679,1690171852273, splitWal=true, meta=false 2023-07-24 04:11:24,921 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=119, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-24 04:11:24,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 21 msec 2023-07-24 04:11:24,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 04:11:24,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-24 04:11:24,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,46039,1690171872278, table=hbase:meta, region=1588230740 2023-07-24 04:11:24,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 
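The long run of procedure2.ProcedureExecutor(411) "Completed pid=..., state=SUCCESS/ROLLEDBACK" records above is the restarted master reloading every procedure persisted in the region procedure store ("Loaded RegionProcedureStore in 21 msec"): procedures already in a terminal state are only reported, while unfinished ones would be re-queued and resumed from their persisted step. The sketch below illustrates that load-and-filter idea; every type and method name in it is hypothetical and is not HBase's procedure-v2 API.

    import java.util.List;

    // Sketch: on restart, reload persisted procedures and re-schedule only the
    // unfinished ones; completed/rolled-back procedures are left as history.
    final class ProcedureReplaySketch {
        enum State { RUNNABLE, WAITING, SUCCESS, FAILED, ROLLEDBACK }

        static final class StoredProcedure {
            final long pid;
            final State state;
            StoredProcedure(long pid, State state) { this.pid = pid; this.state = state; }
            boolean isFinished() {
                return state == State.SUCCESS || state == State.FAILED || state == State.ROLLEDBACK;
            }
        }

        interface Scheduler { void addBack(StoredProcedure proc); }

        static void load(List<StoredProcedure> stored, Scheduler scheduler) {
            for (StoredProcedure proc : stored) {
                if (proc.isFinished()) {
                    continue;              // finished before the restart; nothing to resume
                }
                scheduler.addBack(proc);   // unfinished work resumes from its persisted state
            }
        }
    }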
2023-07-24 04:11:24,925 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43857,1690171872444 already deleted, retry=false 2023-07-24 04:11:24,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,43857,1690171872444 on jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=122, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,43857,1690171872444, splitWal=true, meta=false 2023-07-24 04:11:24,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=122 for jenkins-hbase4.apache.org,43857,1690171872444 (carryingMeta=false) jenkins-hbase4.apache.org,43857,1690171872444/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@35aa3bc1[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 04:11:24,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43611,1690171872392 already deleted, retry=false 2023-07-24 04:11:24,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,43611,1690171872392 on jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,43611,1690171872392, splitWal=true, meta=false 2023-07-24 04:11:24,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=123 for jenkins-hbase4.apache.org,43611,1690171872392 (carryingMeta=false) jenkins-hbase4.apache.org,43611,1690171872392/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@75229e80[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 04:11:24,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46039,1690171872278 already deleted, retry=false 2023-07-24 04:11:24,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,46039,1690171872278 on jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=124, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,46039,1690171872278, splitWal=true, meta=true 2023-07-24 04:11:24,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=124 for jenkins-hbase4.apache.org,46039,1690171872278 (carryingMeta=true) jenkins-hbase4.apache.org,46039,1690171872278/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@1d7d9f0c[Write locks = 1, Read locks = 0], oldState=ONLINE. 
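The ServerManager(568)/AssignmentManager(1734) records above show the new master comparing the region servers that were alive under the previous master with what is currently registered in ZooKeeper, and scheduling a ServerCrashProcedure (pid=122-124, with meta=true only for the server that carried hbase:meta) for each one that is gone. Below is a hypothetical sketch of that diff-and-schedule pass, not the real RegionServerTracker/ServerManager code.

    import java.util.HashSet;
    import java.util.Set;
    import java.util.function.BiConsumer;

    // Sketch: any server that was live before the master restart but is no longer
    // registered in ZooKeeper gets crash handling (WAL split, region reassignment).
    final class ExpireDeadServersSketch {
        static void expireMissingServers(Set<String> previouslyLive,
                                         Set<String> registeredInZk,
                                         Set<String> serversCarryingMeta,
                                         BiConsumer<String, Boolean> scheduleServerCrashProcedure) {
            Set<String> dead = new HashSet<>(previouslyLive);
            dead.removeAll(registeredInZk);   // live under the old master, gone from ZK now
            for (String server : dead) {
                boolean carryingMeta = serversCarryingMeta.contains(server);
                scheduleServerCrashProcedure.accept(server, carryingMeta);
            }
        }
    }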
2023-07-24 04:11:24,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-24 04:11:24,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 04:11:24,933 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 04:11:24,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 04:11:24,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 04:11:24,937 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 04:11:24,938 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:24,938 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:24,938 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:24,938 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:24,939 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:24,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37329,1690171884592, sessionid=0x10195863d98001c, setting cluster-up flag (Was=false) 2023-07-24 04:11:24,944 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 04:11:24,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 04:11:24,949 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:24,950 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/.hbase-snapshot/.tmp 2023-07-24 04:11:24,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 04:11:24,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 04:11:24,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-24 04:11:24,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 04:11:24,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-24 04:11:24,954 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:24,955 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:24,956 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:46039 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:46039 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:24,958 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:46039 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:46039 2023-07-24 04:11:24,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 04:11:24,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 04:11:24,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 04:11:24,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 04:11:24,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:24,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:24,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:24,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 04:11:24,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 04:11:24,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:24,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:24,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:24,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690171914975 2023-07-24 04:11:24,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 04:11:24,976 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46039,1690171872278; numProcessing=1 2023-07-24 04:11:24,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 04:11:24,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 04:11:24,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 04:11:24,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 04:11:24,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 04:11:24,976 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=124, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,46039,1690171872278, splitWal=true, meta=true 2023-07-24 04:11:24,978 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:24,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 04:11:24,979 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43857,1690171872444; numProcessing=2 2023-07-24 04:11:24,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 04:11:24,979 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=124, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,46039,1690171872278, splitWal=true, meta=true, isMeta: true 2023-07-24 04:11:24,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 04:11:24,979 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=122, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43857,1690171872444, splitWal=true, meta=false 2023-07-24 04:11:24,979 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43611,1690171872392; numProcessing=3 2023-07-24 04:11:24,980 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=123, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43611,1690171872392, splitWal=true, meta=false 2023-07-24 04:11:24,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 04:11:24,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 04:11:24,981 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278-splitting 2023-07-24 04:11:24,982 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278-splitting dir is empty, no logs to split. 
2023-07-24 04:11:24,982 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,46039,1690171872278 WAL count=0, meta=true 2023-07-24 04:11:24,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171884980,5,FailOnTimeoutGroup] 2023-07-24 04:11:24,985 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:24,987 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:24,987 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:24,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171884987,5,FailOnTimeoutGroup] 2023-07-24 04:11:24,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:24,989 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:24,989 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:24,990 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 04:11:24,990 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:24,990 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278-splitting dir is empty, no logs to split. 2023-07-24 04:11:24,990 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:24,991 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,46039,1690171872278 WAL count=0, meta=true 2023-07-24 04:11:24,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:24,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690171884991, completionTime=-1 2023-07-24 04:11:24,991 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-24 04:11:24,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 04:11:24,991 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,46039,1690171872278 WAL splitting is done? 
wals=0, meta=true 2023-07-24 04:11:24,991 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:24,991 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:24,991 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:24,991 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:24,992 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 04:11:24,994 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=125, ppid=124, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 04:11:24,995 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:24,995 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:24,996 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:24,997 DEBUG [RS:0;jenkins-hbase4:40545] zookeeper.ReadOnlyZKClient(139): Connect 0x49e53547 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:24,997 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=125, ppid=124, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:24,998 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:24,998 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:25,000 DEBUG [RS:1;jenkins-hbase4:46393] zookeeper.ReadOnlyZKClient(139): Connect 0x18bbef2f to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:25,000 DEBUG [RS:2;jenkins-hbase4:44573] zookeeper.ReadOnlyZKClient(139): Connect 0x3d4078ed to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:25,004 DEBUG [RS:0;jenkins-hbase4:40545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4722d20b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:25,004 DEBUG [RS:0;jenkins-hbase4:40545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@431d5e37, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:25,006 DEBUG [RS:2;jenkins-hbase4:44573] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33878135, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:25,006 DEBUG [RS:1;jenkins-hbase4:46393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@167c0248, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:25,006 DEBUG [RS:2;jenkins-hbase4:44573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37b8139d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:25,006 DEBUG [RS:1;jenkins-hbase4:46393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1243845f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:25,012 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40545 2023-07-24 04:11:25,013 INFO [RS:0;jenkins-hbase4:40545] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:25,013 INFO [RS:0;jenkins-hbase4:40545] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:25,013 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:11:25,013 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37329,1690171884592 with isa=jenkins-hbase4.apache.org/172.31.14.131:40545, startcode=1690171884651 2023-07-24 04:11:25,013 DEBUG [RS:0;jenkins-hbase4:40545] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:25,014 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46393 2023-07-24 04:11:25,014 INFO [RS:1;jenkins-hbase4:46393] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:25,014 INFO [RS:1;jenkins-hbase4:46393] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:25,014 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:11:25,014 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44573 2023-07-24 04:11:25,014 INFO [RS:2;jenkins-hbase4:44573] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:25,014 INFO [RS:2;jenkins-hbase4:44573] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:25,014 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 04:11:25,015 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37329,1690171884592 with isa=jenkins-hbase4.apache.org/172.31.14.131:46393, startcode=1690171884706 2023-07-24 04:11:25,015 DEBUG [RS:1;jenkins-hbase4:46393] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:25,015 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37329,1690171884592 with isa=jenkins-hbase4.apache.org/172.31.14.131:44573, startcode=1690171884749 2023-07-24 04:11:25,015 DEBUG [RS:2;jenkins-hbase4:44573] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:25,019 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33345, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:25,019 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37789, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:25,019 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42163, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:25,020 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37329] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,021 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:25,021 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 04:11:25,021 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37329] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,021 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:25,022 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37329] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,022 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:25,021 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 04:11:25,022 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:25,022 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 04:11:25,022 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:25,022 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45469 2023-07-24 04:11:25,022 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:25,022 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:25,023 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:25,023 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45469 2023-07-24 04:11:25,023 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45469 2023-07-24 04:11:25,024 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:25,026 DEBUG [RS:0;jenkins-hbase4:40545] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,026 WARN [RS:0;jenkins-hbase4:40545] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 04:11:25,026 INFO [RS:0;jenkins-hbase4:40545] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:25,026 DEBUG [RS:2;jenkins-hbase4:44573] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,026 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,026 DEBUG [RS:1;jenkins-hbase4:46393] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,026 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40545,1690171884651] 2023-07-24 04:11:25,026 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46393,1690171884706] 2023-07-24 04:11:25,026 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44573,1690171884749] 2023-07-24 04:11:25,026 WARN [RS:2;jenkins-hbase4:44573] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:11:25,026 WARN [RS:1;jenkins-hbase4:46393] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 04:11:25,026 INFO [RS:1;jenkins-hbase4:46393] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:25,026 INFO [RS:2;jenkins-hbase4:44573] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:25,027 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,027 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,033 DEBUG [RS:0;jenkins-hbase4:40545] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,034 DEBUG [RS:0;jenkins-hbase4:40545] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,034 DEBUG [RS:2;jenkins-hbase4:44573] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,034 DEBUG [RS:1;jenkins-hbase4:46393] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,034 DEBUG [RS:0;jenkins-hbase4:40545] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,034 DEBUG [RS:2;jenkins-hbase4:44573] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,034 DEBUG [RS:1;jenkins-hbase4:46393] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,034 DEBUG [RS:2;jenkins-hbase4:44573] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,035 DEBUG [RS:1;jenkins-hbase4:46393] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,035 DEBUG [RS:0;jenkins-hbase4:40545] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:25,035 INFO [RS:0;jenkins-hbase4:40545] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:25,035 DEBUG [RS:2;jenkins-hbase4:44573] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:25,035 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-24 04:11:25,036 INFO [RS:2;jenkins-hbase4:44573] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:25,036 INFO [RS:1;jenkins-hbase4:46393] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:25,038 INFO [RS:0;jenkins-hbase4:40545] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:25,038 INFO [RS:2;jenkins-hbase4:44573] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:25,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=50ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 04:11:25,043 INFO [RS:0;jenkins-hbase4:40545] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:25,043 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,043 INFO [RS:2;jenkins-hbase4:44573] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:25,043 INFO [RS:1;jenkins-hbase4:46393] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:25,043 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,046 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:25,047 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:25,048 INFO [RS:1;jenkins-hbase4:46393] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:25,048 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,048 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:25,049 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,049 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:25,049 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,049 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,049 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,049 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,049 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,049 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:0;jenkins-hbase4:40545] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:2;jenkins-hbase4:44573] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,050 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,051 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,051 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,051 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,051 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,051 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,051 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,051 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,051 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,051 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:25,051 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:25,051 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,052 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,052 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,052 DEBUG [RS:1;jenkins-hbase4:46393] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:25,058 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,058 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,058 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,059 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:46039 this server is in the failed servers list 2023-07-24 04:11:25,068 INFO [RS:0;jenkins-hbase4:40545] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:25,068 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40545,1690171884651-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,070 INFO [RS:2;jenkins-hbase4:44573] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:25,070 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44573,1690171884749-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:25,077 INFO [RS:1;jenkins-hbase4:46393] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:25,077 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46393,1690171884706-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:25,079 INFO [RS:0;jenkins-hbase4:40545] regionserver.Replication(203): jenkins-hbase4.apache.org,40545,1690171884651 started 2023-07-24 04:11:25,079 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40545,1690171884651, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40545, sessionid=0x10195863d98001d 2023-07-24 04:11:25,079 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:25,079 DEBUG [RS:0;jenkins-hbase4:40545] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,079 DEBUG [RS:0;jenkins-hbase4:40545] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40545,1690171884651' 2023-07-24 04:11:25,079 DEBUG [RS:0;jenkins-hbase4:40545] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:25,079 DEBUG [RS:0;jenkins-hbase4:40545] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40545,1690171884651' 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:25,080 DEBUG [RS:0;jenkins-hbase4:40545] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:25,080 INFO [RS:0;jenkins-hbase4:40545] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:11:25,080 INFO [RS:0;jenkins-hbase4:40545] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 04:11:25,086 INFO [RS:2;jenkins-hbase4:44573] regionserver.Replication(203): jenkins-hbase4.apache.org,44573,1690171884749 started 2023-07-24 04:11:25,086 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44573,1690171884749, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44573, sessionid=0x10195863d98001f 2023-07-24 04:11:25,086 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:25,086 DEBUG [RS:2;jenkins-hbase4:44573] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,086 DEBUG [RS:2;jenkins-hbase4:44573] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44573,1690171884749' 2023-07-24 04:11:25,086 DEBUG [RS:2;jenkins-hbase4:44573] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:25,086 DEBUG [RS:2;jenkins-hbase4:44573] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:25,087 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:25,087 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:25,087 DEBUG [RS:2;jenkins-hbase4:44573] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:25,087 DEBUG [RS:2;jenkins-hbase4:44573] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44573,1690171884749' 2023-07-24 04:11:25,087 DEBUG [RS:2;jenkins-hbase4:44573] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:25,088 DEBUG [RS:2;jenkins-hbase4:44573] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:25,088 DEBUG [RS:2;jenkins-hbase4:44573] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:25,088 INFO [RS:2;jenkins-hbase4:44573] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:11:25,088 INFO [RS:2;jenkins-hbase4:44573] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 04:11:25,092 INFO [RS:1;jenkins-hbase4:46393] regionserver.Replication(203): jenkins-hbase4.apache.org,46393,1690171884706 started 2023-07-24 04:11:25,092 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46393,1690171884706, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46393, sessionid=0x10195863d98001e 2023-07-24 04:11:25,092 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:25,092 DEBUG [RS:1;jenkins-hbase4:46393] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,092 DEBUG [RS:1;jenkins-hbase4:46393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46393,1690171884706' 2023-07-24 04:11:25,092 DEBUG [RS:1;jenkins-hbase4:46393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:25,093 DEBUG [RS:1;jenkins-hbase4:46393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:25,093 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:25,093 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:25,093 DEBUG [RS:1;jenkins-hbase4:46393] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,093 DEBUG [RS:1;jenkins-hbase4:46393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46393,1690171884706' 2023-07-24 04:11:25,093 DEBUG [RS:1;jenkins-hbase4:46393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:25,094 DEBUG [RS:1;jenkins-hbase4:46393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:25,094 DEBUG [RS:1;jenkins-hbase4:46393] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:25,094 INFO [RS:1;jenkins-hbase4:46393] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:11:25,094 INFO [RS:1;jenkins-hbase4:46393] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 04:11:25,147 DEBUG [jenkins-hbase4:37329] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 04:11:25,148 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:25,148 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:25,148 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:25,148 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:25,148 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:25,149 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46393,1690171884706, state=OPENING 2023-07-24 04:11:25,151 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:25,151 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:25,152 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=126, ppid=125, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46393,1690171884706}] 2023-07-24 04:11:25,184 INFO [RS:0;jenkins-hbase4:40545] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40545%2C1690171884651, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,40545,1690171884651, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:25,190 INFO [RS:2;jenkins-hbase4:44573] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44573%2C1690171884749, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,44573,1690171884749, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:25,196 INFO [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46393%2C1690171884706, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46393,1690171884706, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:25,211 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:25,212 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:25,212 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:25,224 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:25,225 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:25,225 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:25,231 INFO [RS:0;jenkins-hbase4:40545] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,40545,1690171884651/jenkins-hbase4.apache.org%2C40545%2C1690171884651.1690171885184 2023-07-24 04:11:25,236 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:25,236 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:25,236 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:25,241 DEBUG [RS:0;jenkins-hbase4:40545] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:25,247 INFO [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46393,1690171884706/jenkins-hbase4.apache.org%2C46393%2C1690171884706.1690171885196 2023-07-24 04:11:25,248 DEBUG [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK]] 2023-07-24 04:11:25,248 INFO [RS:2;jenkins-hbase4:44573] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,44573,1690171884749/jenkins-hbase4.apache.org%2C44573%2C1690171884749.1690171885190 2023-07-24 04:11:25,248 DEBUG [RS:2;jenkins-hbase4:44573] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK]] 2023-07-24 04:11:25,261 WARN [ReadOnlyZKClient-127.0.0.1:59235@0x7ced5fa0] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 04:11:25,261 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:25,264 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53696, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:25,265 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46393] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:53696 deadline: 1690171945265, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,308 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:25,310 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:11:25,313 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:11:25,318 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 04:11:25,318 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:25,320 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46393%2C1690171884706.meta, suffix=.meta, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46393,1690171884706, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:25,339 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:25,339 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:25,339 DEBUG [RS-EventLoopGroup-16-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:25,342 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46393,1690171884706/jenkins-hbase4.apache.org%2C46393%2C1690171884706.meta.1690171885321.meta 2023-07-24 04:11:25,342 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:25,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:25,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:11:25,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 04:11:25,343 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
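The CoprocessorHost entries above show the region server resolving org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from the hbase:meta table descriptor with "path null" (no external jar) and priority 536870911, which equals Integer.MAX_VALUE / 4. A minimal, hypothetical Java sketch of that resolve-by-name step, not HBase's actual CoprocessorHost code, looks like this:

    import java.lang.reflect.Constructor;

    // Sketch only: resolve a coprocessor implementation class by name, as the
    // CoprocessorHost entries above do for MultiRowMutationEndpoint. This is not
    // the HBase loading code; instantiate() needs the HBase jars on the classpath.
    public class CoprocessorLoadSketch {
        // The logged priority 536870911 equals Integer.MAX_VALUE / 4.
        static final int LOGGED_PRIORITY = Integer.MAX_VALUE / 4;

        static Object instantiate(String className) throws Exception {
            // "path null" in the log means no extra jar was supplied, so the server
            // classpath is searched directly; plain Class.forName models that here.
            Class<?> clazz = Class.forName(className);
            Constructor<?> ctor = clazz.getDeclaredConstructor();
            ctor.setAccessible(true);
            return ctor.newInstance();
        }

        public static void main(String[] args) {
            String name = "org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint";
            System.out.println("would load " + name + " at priority " + LOGGED_PRIORITY);
        }
    }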
2023-07-24 04:11:25,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 04:11:25,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:25,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 04:11:25,344 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 04:11:25,345 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 04:11:25,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:11:25,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info 2023-07-24 04:11:25,347 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 04:11:25,362 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:25,362 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:25,367 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72 2023-07-24 04:11:25,376 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:25,376 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:25,376 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-24 04:11:25,376 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 04:11:25,377 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:11:25,377 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier 2023-07-24 04:11:25,377 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 04:11:25,387 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 354c8876ee08418994b55326872ce722 2023-07-24 04:11:25,387 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/354c8876ee08418994b55326872ce722 2023-07-24 04:11:25,391 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:25,391 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/rep_barrier/d38645e5353e4b70a81221f90b832aa9 2023-07-24 04:11:25,391 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:25,392 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 04:11:25,392 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:11:25,392 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table 2023-07-24 04:11:25,393 INFO [StoreOpener-1588230740-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 04:11:25,399 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:25,399 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:25,403 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0 2023-07-24 04:11:25,408 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:25,408 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:25,409 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:25,410 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:11:25,411 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740 2023-07-24 04:11:25,413 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
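The FlushLargeStoresPolicy entry above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set on hbase:meta, so the lower bound falls back to the region memstore flush size divided by the number of column families. Assuming the default 128 MB flush size and the three families opened here (info, rep_barrier, table), that division reproduces both the ~42.7 M figure in this entry and the flushSizeLowerBound=44739242 reported when the region opens; a plain-Java check of the arithmetic:

    // Arithmetic check of the fallback flush lower bound logged above, assuming
    // the default hbase.hregion.memstore.flush.size of 128 MB.
    public class FlushLowerBoundCheck {
        public static void main(String[] args) {
            long memstoreFlushSize = 128L * 1024 * 1024; // 134217728 bytes
            int families = 3;                            // info, rep_barrier, table
            long lowerBound = memstoreFlushSize / families;
            System.out.println(lowerBound);              // 44739242 bytes, i.e. ~42.7 MB
        }
    }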
2023-07-24 04:11:25,414 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 04:11:25,415 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=159; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11643150080, jitterRate=0.08435285091400146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 04:11:25,415 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 04:11:25,416 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=126, masterSystemTime=1690171885307 2023-07-24 04:11:25,418 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 04:11:25,419 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-24 04:11:25,423 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-24 04:11:25,423 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-24 04:11:25,425 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 04:11:25,426 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 04:11:25,427 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46393,1690171884706, state=OPEN 2023-07-24 04:11:25,428 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16961 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-24 04:11:25,428 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 23640 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-24 04:11:25,428 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 04:11:25,429 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 04:11:25,431 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-24 04:11:25,431 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.HStore(1912): 1588230740/table is initiating minor compaction (all files) 2023-07-24 04:11:25,431 INFO 
[RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-24 04:11:25,431 INFO [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/table in hbase:meta,,1.1588230740 2023-07-24 04:11:25,431 INFO [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72] into tmpdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp, totalSize=23.1 K 2023-07-24 04:11:25,431 INFO [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0] into tmpdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp, totalSize=16.6 K 2023-07-24 04:11:25,431 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=126, resume processing ppid=125 2023-07-24 04:11:25,431 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=125, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46393,1690171884706 in 278 msec 2023-07-24 04:11:25,432 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] compactions.Compactor(207): Compacting e673da21eba54a61b6fc1007d80762bf, keycount=17, bloomtype=NONE, size=6.2 K, encoding=NONE, compression=NONE, seqNum=74, earliestPutTs=1690171854965 2023-07-24 04:11:25,432 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] compactions.Compactor(207): Compacting ea6a294b028040dcb802cfd24f5c7162, keycount=42, bloomtype=NONE, size=9.5 K, encoding=NONE, compression=NONE, seqNum=74, earliestPutTs=1690171854914 2023-07-24 04:11:25,433 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] compactions.Compactor(207): Compacting 763c92d71bbe40558f4f7141fc340072, keycount=10, bloomtype=NONE, size=5.7 K, encoding=NONE, compression=NONE, seqNum=140, earliestPutTs=9223372036854775807 2023-07-24 04:11:25,433 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] compactions.Compactor(207): Compacting 35ed1307fbf449eea8d4667880d2c6b7, keycount=10, bloomtype=NONE, size=5.9 K, encoding=NONE, compression=NONE, seqNum=140, earliestPutTs=9223372036854775807 2023-07-24 04:11:25,433 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] compactions.Compactor(207): Compacting d0bd780de0244b1c96eaeb749f1d60a0, keycount=2, bloomtype=NONE, size=4.7 K, encoding=NONE, compression=NONE, seqNum=155, earliestPutTs=1690171878364 2023-07-24 04:11:25,434 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] compactions.Compactor(207): Compacting 
de3310ea376f4898b8ea51fb19fe2f72, keycount=26, bloomtype=NONE, size=7.7 K, encoding=NONE, compression=NONE, seqNum=155, earliestPutTs=1690171877331 2023-07-24 04:11:25,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-24 04:11:25,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 439 msec 2023-07-24 04:11:25,454 INFO [RS:1;jenkins-hbase4:46393-longCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#table#compaction#13 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-24 04:11:25,457 INFO [RS:1;jenkins-hbase4:46393-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#14 average throughput is 4.26 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-24 04:11:25,496 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/table/43d9f7dfedf145a3bc684cd65c4075fc as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/43d9f7dfedf145a3bc684cd65c4075fc 2023-07-24 04:11:25,498 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/88f775ffa33c4c08b22ddeaa1cc3eec2 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/88f775ffa33c4c08b22ddeaa1cc3eec2 2023-07-24 04:11:25,519 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 04:11:25,519 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 04:11:25,521 INFO [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/table of 1588230740 into 43d9f7dfedf145a3bc684cd65c4075fc(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-24 04:11:25,521 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-24 04:11:25,522 INFO [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/table, priority=13, startTime=1690171885419; duration=0sec 2023-07-24 04:11:25,522 INFO [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 88f775ffa33c4c08b22ddeaa1cc3eec2(size=9.2 K), total size for store is 9.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
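Both compactions above select all three store files of the info and table families (totals 23.1 K and 16.6 K) with the logged ratio of 1.200000. A simplified sketch of the size-ratio test behind that kind of selection, meant as an illustration of the idea rather than the ExploringCompactionPolicy implementation:

    // Simplified size-ratio check used when picking minor-compaction candidates;
    // the 1.2 ratio and the approximate file sizes (KB) come from the log above.
    public class RatioSelectionSketch {
        static boolean withinRatio(double[] windowKb, double ratio) {
            double total = 0;
            for (double s : windowKb) total += s;
            for (double s : windowKb) {
                // Reject a window where one file is much larger than the rest combined.
                if (s > ratio * (total - s)) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            double[] infoFiles = {9.5, 5.9, 7.7};   // hbase:meta info store files
            double[] tableFiles = {6.2, 5.7, 4.7};  // hbase:meta table store files
            System.out.println(withinRatio(infoFiles, 1.2));  // true -> all 3 selected
            System.out.println(withinRatio(tableFiles, 1.2)); // true -> all 3 selected
        }
    }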
2023-07-24 04:11:25,522 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-24 04:11:25,522 INFO [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1690171885418; duration=0sec 2023-07-24 04:11:25,523 DEBUG [RS:1;jenkins-hbase4:46393-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 04:11:25,523 DEBUG [RS:1;jenkins-hbase4:46393-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 04:11:25,585 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:25,586 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43611 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:25,586 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43611 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 2023-07-24 04:11:25,693 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43611 this server is in the failed servers list 2023-07-24 04:11:25,897 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43611 this server is in the failed servers list 2023-07-24 04:11:26,201 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43611 this server is in the failed servers list 2023-07-24 04:11:26,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1553ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1503ms 2023-07-24 04:11:26,706 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43611 this server is in the failed servers list 2023-07-24 04:11:27,712 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43611 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:27,713 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43611 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 2023-07-24 04:11:28,046 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3055ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3005ms 2023-07-24 04:11:29,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4508ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-24 04:11:29,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-24 04:11:29,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=73e1052e9bc949a33667944e6caa42b4, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,46039,1690171872278, regionLocation=jenkins-hbase4.apache.org,46039,1690171872278, openSeqNum=15 2023-07-24 04:11:29,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=6d1026a50e3a812feaa5fb2336097299, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,43857,1690171872444, regionLocation=jenkins-hbase4.apache.org,43857,1690171872444, openSeqNum=2 2023-07-24 04:11:29,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=6aa1ab126d58dcf7d835257119c9304f, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,43611,1690171872392, regionLocation=jenkins-hbase4.apache.org,43611,1690171872392, openSeqNum=71 2023-07-24 04:11:29,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 04:11:29,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690171949502 2023-07-24 04:11:29,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690172009502 2023-07-24 04:11:29,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 2 msec 2023-07-24 04:11:29,517 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,46039,1690171872278 had 2 regions 2023-07-24 04:11:29,518 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,43857,1690171872444 had 1 regions 2023-07-24 04:11:29,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37329,1690171884592-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:29,518 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,43611,1690171872392 had 1 regions 2023-07-24 04:11:29,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37329,1690171884592-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
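The connection-refused stack traces and the ipc.FailedServers / AbstractRpcClient entries earlier in this section show the RSGroup startup worker skipping reconnects to 172.31.14.131:43611 while that address sits on the failed-servers list. A rough sketch of that pattern, a remembered failure with a short quiet window, where the window length and names are chosen for illustration rather than taken from HBase:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a failed-servers list: after a refused connection, remember the
    // address for a short window and skip new attempts until it expires. The
    // 2-second window and field names are illustrative, not HBase's values.
    public class FailedServersSketch {
        private final Map<String, Long> failedUntil = new HashMap<>();
        private final long quietMillis = 2_000L;

        void addFailure(String address) {
            failedUntil.put(address, System.currentTimeMillis() + quietMillis);
        }

        boolean shouldSkip(String address) {
            Long until = failedUntil.get(address);
            if (until == null) return false;
            if (System.currentTimeMillis() >= until) {
                failedUntil.remove(address); // quiet window over, allow a new attempt
                return false;
            }
            return true; // "Not trying to connect ... in the failed servers list"
        }

        public static void main(String[] args) {
            FailedServersSketch list = new FailedServersSketch();
            String addr = "172.31.14.131:43611";
            list.addFailure(addr);
            System.out.println(list.shouldSkip(addr)); // true inside the quiet window
        }
    }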
2023-07-24 04:11:29,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37329,1690171884592-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:29,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37329, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:29,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:29,519 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. is NOT online; state={73e1052e9bc949a33667944e6caa42b4 state=OPEN, ts=1690171889502, server=jenkins-hbase4.apache.org,46039,1690171872278}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-24 04:11:29,519 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=123, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43611,1690171872392, splitWal=true, meta=false, isMeta: false 2023-07-24 04:11:29,520 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=124, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,46039,1690171872278, splitWal=true, meta=true, isMeta: false 2023-07-24 04:11:29,520 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=122, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43857,1690171872444, splitWal=true, meta=false, isMeta: false 2023-07-24 04:11:29,521 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392-splitting 2023-07-24 04:11:29,523 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,46039,1690171872278/hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4., unknown_server=jenkins-hbase4.apache.org,43857,1690171872444/hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299., unknown_server=jenkins-hbase4.apache.org,43611,1690171872392/hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:29,523 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392-splitting dir is empty, no logs to split. 2023-07-24 04:11:29,523 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,43611,1690171872392 WAL count=0, meta=false 2023-07-24 04:11:29,524 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278-splitting dir is empty, no logs to split. 
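The ChoreService entries above register the master's periodic maintenance tasks (ClusterStatusChore, BalancerChore, RegionNormalizerChore, CatalogJanitor, HbckChore) with fixed periods in milliseconds. As a loose analogue only, not HBase's ChoreService, a fixed-period chore can be modelled with a ScheduledExecutorService; the 300000 ms period mirrors the BalancerChore entry:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Loose analogue of a ScheduledChore: run a named task at a fixed period.
    // The 300000 ms period matches the BalancerChore entry above; everything
    // else here is illustrative.
    public class ChoreSketch {
        public static void main(String[] args) {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
            Runnable balancerChore = () -> System.out.println("balancer pass");
            pool.scheduleAtFixedRate(balancerChore, 0, 300_000, TimeUnit.MILLISECONDS);
            // pool.shutdown() would cancel the chore when the master stops.
        }
    }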
2023-07-24 04:11:29,524 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,46039,1690171872278 WAL count=0, meta=false 2023-07-24 04:11:29,524 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444-splitting 2023-07-24 04:11:29,525 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444-splitting dir is empty, no logs to split. 2023-07-24 04:11:29,525 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43857,1690171872444 WAL count=0, meta=false 2023-07-24 04:11:29,526 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392-splitting dir is empty, no logs to split. 2023-07-24 04:11:29,526 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,43611,1690171872392 WAL count=0, meta=false 2023-07-24 04:11:29,526 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,43611,1690171872392 WAL splitting is done? wals=0, meta=false 2023-07-24 04:11:29,527 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46039,1690171872278-splitting dir is empty, no logs to split. 2023-07-24 04:11:29,527 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,46039,1690171872278 WAL count=0, meta=false 2023-07-24 04:11:29,527 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,46039,1690171872278 WAL splitting is done? wals=0, meta=false 2023-07-24 04:11:29,528 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444-splitting dir is empty, no logs to split. 2023-07-24 04:11:29,528 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43857,1690171872444 WAL count=0, meta=false 2023-07-24 04:11:29,528 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,43857,1690171872444 WAL splitting is done? wals=0, meta=false 2023-07-24 04:11:29,528 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,43611,1690171872392 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43611,1690171872392-splitting does not exist. 
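The SplitLogManager / SplitWALManager entries above check each dead server's WALs/<server>-splitting directory, find it empty, and report WAL count=0, so no log-splitting sub-procedures are scheduled. A local-filesystem analogue of that emptiness check (the real check runs against HDFS through the Hadoop FileSystem API; the path below is hypothetical):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Local-filesystem analogue of the "-splitting dir is empty, no logs to split"
    // check above; shape of the check only, not the SplitLogManager code.
    public class SplitDirCheck {
        static int countWals(Path splittingDir) throws IOException {
            if (!Files.exists(splittingDir)) {
                return 0; // directory was already renamed away or never created
            }
            int count = 0;
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(splittingDir)) {
                for (Path ignored : entries) count++;
            }
            return count;
        }

        public static void main(String[] args) throws IOException {
            Path dir = Path.of("/tmp/example-server-splitting"); // hypothetical path
            int wals = countWals(dir);
            System.out.println("WAL count=" + wals + ", meta=false"
                + (wals == 0 ? " (no logs to split)" : ""));
        }
    }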
2023-07-24 04:11:29,528 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=124, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN}] 2023-07-24 04:11:29,529 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN}] 2023-07-24 04:11:29,529 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=124, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN 2023-07-24 04:11:29,530 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=124, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:29,530 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN 2023-07-24 04:11:29,530 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,43857,1690171872444 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,43857,1690171872444-splitting does not exist. 
2023-07-24 04:11:29,530 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:29,530 DEBUG [jenkins-hbase4:37329] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 04:11:29,531 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:29,531 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:29,531 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:29,531 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN}] 2023-07-24 04:11:29,531 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:29,531 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-24 04:11:29,533 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=6aa1ab126d58dcf7d835257119c9304f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:29,533 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:29,533 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171889533"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171889533"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171889533"}]},"ts":"1690171889533"} 2023-07-24 04:11:29,533 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171889533"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171889533"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171889533"}]},"ts":"1690171889533"} 2023-07-24 04:11:29,533 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN 2023-07-24 04:11:29,534 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 04:11:29,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=128, state=RUNNABLE; OpenRegionProcedure 6aa1ab126d58dcf7d835257119c9304f, 
server=jenkins-hbase4.apache.org,46393,1690171884706}] 2023-07-24 04:11:29,536 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=127, state=RUNNABLE; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,44573,1690171884749}] 2023-07-24 04:11:29,684 DEBUG [jenkins-hbase4:37329] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 04:11:29,685 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 04:11:29,685 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 04:11:29,685 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 04:11:29,685 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 04:11:29,685 DEBUG [jenkins-hbase4:37329] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 04:11:29,687 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=6d1026a50e3a812feaa5fb2336097299, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:29,687 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690171889686"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171889686"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171889686"}]},"ts":"1690171889686"} 2023-07-24 04:11:29,689 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:29,690 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:11:29,691 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=129, state=RUNNABLE; OpenRegionProcedure 6d1026a50e3a812feaa5fb2336097299, server=jenkins-hbase4.apache.org,46393,1690171884706}] 2023-07-24 04:11:29,691 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:11:29,699 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:29,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6aa1ab126d58dcf7d835257119c9304f, NAME => 'hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:29,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 04:11:29,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
service=MultiRowMutationService 2023-07-24 04:11:29,700 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 04:11:29,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:29,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,701 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:29,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73e1052e9bc949a33667944e6caa42b4, NAME => 'hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:29,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:29,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,711 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,716 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,717 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m 2023-07-24 04:11:29,717 DEBUG 
[StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m 2023-07-24 04:11:29,717 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6aa1ab126d58dcf7d835257119c9304f columnFamilyName m 2023-07-24 04:11:29,718 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:11:29,718 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:11:29,719 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73e1052e9bc949a33667944e6caa42b4 columnFamilyName info 2023-07-24 04:11:29,728 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43611 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:29,729 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43611 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 2023-07-24 04:11:29,732 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4160 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:43611 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., hostname=jenkins-hbase4.apache.org,43611,1690171872392, seqNum=71, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:43611 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43611 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 04:11:29,736 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/38da92e746f04a2aa60a5af2d1328ab4 2023-07-24 04:11:29,747 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:29,747 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info/c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:29,747 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(310): Store=73e1052e9bc949a33667944e6caa42b4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:29,749 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,750 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:29,759 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73e1052e9bc949a33667944e6caa42b4; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9547847040, jitterRate=-0.11078745126724243}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:29,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:29,760 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4., pid=131, masterSystemTime=1690171889689 2023-07-24 04:11:29,764 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef83161486a0423dbe81bde050027796 2023-07-24 04:11:29,764 DEBUG [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/ef83161486a0423dbe81bde050027796 2023-07-24 04:11:29,764 INFO [StoreOpener-6aa1ab126d58dcf7d835257119c9304f-1] regionserver.HStore(310): Store=6aa1ab126d58dcf7d835257119c9304f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:29,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,768 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:29,768 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
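The retry loop recorded a few entries above ("Call exception, tries=6, retries=46, started=4160 ms ago ... Connection refused") is driven by the HBase client retry settings. The following is a minimal sketch, not taken from this test, of shortening that retry budget through the standard hbase.client.* keys; the class name and the concrete values are illustrative assumptions only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

public class ClientRetryConfig {
  /** Builds a client Configuration with a small retry budget (values are examples). */
  public static Configuration shortRetries() {
    Configuration conf = HBaseConfiguration.create();
    // Number of retries per client operation; each failed connect above counts as one try.
    conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 6);
    // Base pause between retries in milliseconds; actual sleeps back off from this value.
    conf.setLong(HConstants.HBASE_CLIENT_PAUSE, 100L);
    // Overall per-operation timeout, so a refused connection fails fast
    // instead of exhausting the whole retry budget.
    conf.setLong(HConstants.HBASE_CLIENT_OPERATION_TIMEOUT, 30_000L);
    return conf;
  }
}
```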
2023-07-24 04:11:29,768 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPEN, openSeqNum=18, regionLocation=jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:29,769 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171889768"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171889768"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171889768"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171889768"}]},"ts":"1690171889768"} 2023-07-24 04:11:29,769 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,774 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=127 2023-07-24 04:11:29,774 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=127, state=SUCCESS; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,44573,1690171884749 in 235 msec 2023-07-24 04:11:29,775 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:29,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=124 2023-07-24 04:11:29,776 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,46039,1690171872278 after splitting done 2023-07-24 04:11:29,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=124, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, ASSIGN in 246 msec 2023-07-24 04:11:29,776 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,46039,1690171872278 from processing; numProcessing=2 2023-07-24 04:11:29,777 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6aa1ab126d58dcf7d835257119c9304f; next sequenceid=78; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@39604c01, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:29,777 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:11:29,778 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., pid=130, masterSystemTime=1690171889691 2023-07-24 04:11:29,778 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,46039,1690171872278, splitWal=true, meta=true in 4.8460 sec 2023-07-24 04:11:29,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
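The RegionStateStore Put entries above write the region's location into hbase:meta under the "info" family (qualifiers regioninfo, server, serverstartcode, seqnumDuringOpen). As a rough illustration of what those rows look like from the client side, here is a minimal sketch (the class and method names are assumptions for the example) that reads the "server" qualifier back for a given meta row key, such as the hbase:namespace row shown above.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLocationLookup {
  /**
   * regionRow is an hbase:meta row key, e.g. the
   * "hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4." row logged above.
   */
  static String serverFor(Connection conn, byte[] regionRow) throws Exception {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Get get = new Get(regionRow);
      // The assignment manager stores the hosting server under info:server.
      get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("server"));
      Result result = meta.get(get);
      byte[] server = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
      return server == null ? null : Bytes.toString(server);
    }
  }
}
```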
2023-07-24 04:11:29,780 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:29,780 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=6aa1ab126d58dcf7d835257119c9304f, regionState=OPEN, openSeqNum=78, regionLocation=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:29,780 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690171889780"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171889780"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171889780"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171889780"}]},"ts":"1690171889780"} 2023-07-24 04:11:29,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=128 2023-07-24 04:11:29,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=128, state=SUCCESS; OpenRegionProcedure 6aa1ab126d58dcf7d835257119c9304f, server=jenkins-hbase4.apache.org,46393,1690171884706 in 247 msec 2023-07-24 04:11:29,785 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=123 2023-07-24 04:11:29,785 INFO [PEWorker-4] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,43611,1690171872392 after splitting done 2023-07-24 04:11:29,785 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=6aa1ab126d58dcf7d835257119c9304f, ASSIGN in 255 msec 2023-07-24 04:11:29,785 DEBUG [PEWorker-4] master.DeadServer(114): Removed jenkins-hbase4.apache.org,43611,1690171872392 from processing; numProcessing=1 2023-07-24 04:11:29,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43611,1690171872392, splitWal=true, meta=false in 4.8570 sec 2023-07-24 04:11:29,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 
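At this point the ServerCrashProcedures for the killed region servers have finished and hbase:namespace, hbase:rsgroup and hbase:quota are being reassigned. A test that restarts servers like this typically blocks until those system regions are back online; the sketch below assumes the HBaseTestingUtility helpers used throughout this suite (the wrapper class and TEST_UTIL parameter are illustrative, not from the test source).

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForReassignment {
  /** Waits until the system regions reassigned above are OPEN again. */
  static void waitForSystemTables(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Blocks until every region of the table has an OPEN assignment recorded in hbase:meta.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:rsgroup"));
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:quota"));
    // Then wait until the master reports no regions in transition (60 s budget here).
    TEST_UTIL.waitUntilNoRegionsInTransition(60_000L);
  }
}
```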
2023-07-24 04:11:29,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d1026a50e3a812feaa5fb2336097299, NAME => 'hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:29,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:29,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,863 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,867 DEBUG [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/q 2023-07-24 04:11:29,867 DEBUG [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/q 2023-07-24 04:11:29,868 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d1026a50e3a812feaa5fb2336097299 columnFamilyName q 2023-07-24 04:11:29,868 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(310): Store=6d1026a50e3a812feaa5fb2336097299/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:29,869 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,870 DEBUG 
[StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/u 2023-07-24 04:11:29,870 DEBUG [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/u 2023-07-24 04:11:29,870 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d1026a50e3a812feaa5fb2336097299 columnFamilyName u 2023-07-24 04:11:29,871 INFO [StoreOpener-6d1026a50e3a812feaa5fb2336097299-1] regionserver.HStore(310): Store=6d1026a50e3a812feaa5fb2336097299/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:29,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
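The CompactionConfiguration lines printed for each store above (minCompactSize 128 MB, 3-10 files per minor compaction, ratio 1.2, off-peak ratio 5.0, major period 604800000 ms with 0.5 jitter) reflect the usual hbase.hstore.compaction.* and hbase.hregion.majorcompaction* settings. A minimal sketch of setting the same values programmatically follows; it assumes the standard property names and is not part of this test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  /** Mirrors the compaction parameters logged above as explicit configuration. */
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // Files below this size are always eligible for minor compaction (128 MB above).
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    // Minimum and maximum number of files selected for a single minor compaction.
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    // Selection ratios for normal and off-peak hours.
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    // Major compaction period (ms) and jitter, matching the logged values.
    conf.setLong("hbase.hregion.majorcompaction", 604_800_000L);
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
    return conf;
  }
}
```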
2023-07-24 04:11:29,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:29,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d1026a50e3a812feaa5fb2336097299; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11834649600, jitterRate=0.1021876335144043}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 04:11:29,878 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d1026a50e3a812feaa5fb2336097299: 2023-07-24 04:11:29,879 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299., pid=132, masterSystemTime=1690171889846 2023-07-24 04:11:29,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:29,882 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:29,882 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=6d1026a50e3a812feaa5fb2336097299, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:29,882 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690171889882"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171889882"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171889882"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171889882"}]},"ts":"1690171889882"} 2023-07-24 04:11:29,888 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=129 2023-07-24 04:11:29,888 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; OpenRegionProcedure 6d1026a50e3a812feaa5fb2336097299, server=jenkins-hbase4.apache.org,46393,1690171884706 in 194 msec 2023-07-24 04:11:29,891 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=122 2023-07-24 04:11:29,891 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=122, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6d1026a50e3a812feaa5fb2336097299, ASSIGN in 357 msec 2023-07-24 04:11:29,891 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,43857,1690171872444 after splitting done 2023-07-24 04:11:29,891 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,43857,1690171872444 from processing; numProcessing=0 2023-07-24 04:11:29,893 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43857,1690171872444, splitWal=true, meta=false in 4.9660 sec 2023-07-24 04:11:30,520 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): 
master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-24 04:11:30,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:30,527 INFO [RS-EventLoopGroup-16-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42508, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:30,538 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 04:11:30,541 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 04:11:30,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.744sec 2023-07-24 04:11:30,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 04:11:30,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 04:11:30,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 04:11:30,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37329,1690171884592-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 04:11:30,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37329,1690171884592-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 04:11:30,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 04:11:30,591 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x596ceef6 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:30,596 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17654e94, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:30,598 DEBUG [hconnection-0x712bff34-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:30,600 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:30,606 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-24 04:11:30,606 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x596ceef6 to 127.0.0.1:59235 2023-07-24 04:11:30,606 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:30,608 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:37329 after: jenkins-hbase4.apache.org:37329 2023-07-24 04:11:30,608 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x078901d2 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:30,612 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29758c48, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:30,613 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:30,848 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 04:11:30,912 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 04:11:31,036 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 04:11:31,037 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 04:11:31,037 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-24 04:11:32,687 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-24 04:11:33,739 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] 
rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 04:11:33,739 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 04:11:33,748 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:33,748 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:33,749 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:33,750 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-24 04:11:33,750 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 04:11:33,816 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 04:11:33,818 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36676, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 04:11:33,820 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 04:11:33,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 04:11:33,821 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(139): Connect 0x2892e3be to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:33,827 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ded8549, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:33,828 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:33,831 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:33,831 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10195863d980027 connected 2023-07-24 04:11:33,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:33,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:33,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:33,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:33,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:33,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:33,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:33,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:33,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:33,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:33,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:33,845 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 04:11:33,857 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:33,857 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:33,857 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:33,857 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:33,857 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:33,857 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 
04:11:33,857 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:33,858 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33023 2023-07-24 04:11:33,858 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:33,860 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:33,860 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:33,861 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:33,862 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33023 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:33,866 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:330230x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:33,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33023-0x10195863d980028 connected 2023-07-24 04:11:33,868 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:11:33,869 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 04:11:33,870 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:33,874 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33023 2023-07-24 04:11:33,874 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33023 2023-07-24 04:11:33,874 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33023 2023-07-24 04:11:33,875 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33023 2023-07-24 04:11:33,875 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33023 2023-07-24 04:11:33,877 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:33,878 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:33,878 INFO [Listener at localhost/41307] 
http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:33,878 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:33,878 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:33,878 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:33,879 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:11:33,879 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 37575 2023-07-24 04:11:33,879 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:33,886 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:33,886 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d6dd7e8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:33,887 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:33,887 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6fd09d0c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:33,892 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:33,892 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:33,893 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:33,893 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 04:11:33,893 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:33,894 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7416edcf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:33,896 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@5d718bc5{HTTP/1.1, (http/1.1)}{0.0.0.0:37575} 2023-07-24 04:11:33,896 INFO [Listener at localhost/41307] server.Server(415): Started @50195ms 2023-07-24 04:11:33,898 INFO 
[RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:33,900 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:33,901 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:33,902 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:33,904 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:33,907 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ReadOnlyZKClient(139): Connect 0x71a15e14 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:33,914 DEBUG [RS:3;jenkins-hbase4:33023] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f66be50, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:33,914 DEBUG [RS:3;jenkins-hbase4:33023] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b593564, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:33,922 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33023 2023-07-24 04:11:33,922 INFO [RS:3;jenkins-hbase4:33023] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:33,922 INFO [RS:3;jenkins-hbase4:33023] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:33,922 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 04:11:33,922 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37329,1690171884592 with isa=jenkins-hbase4.apache.org/172.31.14.131:33023, startcode=1690171893856 2023-07-24 04:11:33,923 DEBUG [RS:3;jenkins-hbase4:33023] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:33,924 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48503, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:33,924 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37329] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,924 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 04:11:33,925 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:33,925 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:33,925 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45469 2023-07-24 04:11:33,926 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:33,926 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:33,927 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:33,927 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:33,927 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33023,1690171893856] 2023-07-24 04:11:33,927 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:33,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:33,927 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,928 WARN [RS:3;jenkins-hbase4:33023] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 04:11:33,928 INFO [RS:3;jenkins-hbase4:33023] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:33,928 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 04:11:33,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:33,928 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,929 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:33,929 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 04:11:33,929 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:33,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:33,933 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:33,933 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:33,933 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:33,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:33,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,935 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:33,935 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:33,936 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:33,936 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ZKUtil(162): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,937 DEBUG [RS:3;jenkins-hbase4:33023] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:33,937 INFO [RS:3;jenkins-hbase4:33023] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:33,939 INFO [RS:3;jenkins-hbase4:33023] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:33,941 INFO [RS:3;jenkins-hbase4:33023] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:33,941 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:33,941 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:33,943 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,943 DEBUG [RS:3;jenkins-hbase4:33023] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:33,947 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:33,947 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:33,947 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:33,958 INFO [RS:3;jenkins-hbase4:33023] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:33,958 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33023,1690171893856-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:33,969 INFO [RS:3;jenkins-hbase4:33023] regionserver.Replication(203): jenkins-hbase4.apache.org,33023,1690171893856 started 2023-07-24 04:11:33,969 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33023,1690171893856, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33023, sessionid=0x10195863d980028 2023-07-24 04:11:33,970 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:33,970 DEBUG [RS:3;jenkins-hbase4:33023] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,970 DEBUG [RS:3;jenkins-hbase4:33023] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33023,1690171893856' 2023-07-24 04:11:33,970 DEBUG [RS:3;jenkins-hbase4:33023] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:33,970 DEBUG [RS:3;jenkins-hbase4:33023] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:33,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33023,1690171893856' 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:33,971 DEBUG [RS:3;jenkins-hbase4:33023] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:33,971 INFO [RS:3;jenkins-hbase4:33023] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:11:33,971 INFO [RS:3;jenkins-hbase4:33023] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
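[Editor's note] The RS:3 startup sequence above (executor services, chores, flush/snapshot procedure members, quota managers) is what a freshly added region server emits once it joins the mini cluster. As a hedged sketch of how a test can trigger this, assuming a JUnit test with an already started HBaseTestingUtility named TEST_UTIL (the variable and method names below that are not in the log are assumptions, not the exact test code):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.regionserver.HRegionServer;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    // Sketch: add an extra region server to a running mini cluster, which produces
    // a startup sequence like the RS:3 log entries above.
    public class StartExtraRegionServerSketch {
      static HRegionServer startExtraRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
        rst.waitForServerOnline();                 // block until the new RS reports in
        HRegionServer rs = rst.getRegionServer();
        System.out.println("Started " + rs.getServerName());
        return rs;
      }
    }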
2023-07-24 04:11:33,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:33,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:33,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:33,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:33,980 DEBUG [hconnection-0x722584b7-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 04:11:33,982 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53906, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 04:11:33,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:33,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:33,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37329] to rsgroup master 2023-07-24 04:11:33,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:33,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36676 deadline: 1690173093991, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. 
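[Editor's note] The "add rsgroup master", "move servers [jenkins-hbase4.apache.org:37329] to rsgroup master", and the ConstraintException above correspond to the test harness trying to park the master's address in its own rsgroup; the master is not a region server, so the server-side check rejects the move, and the test simply logs the warning and continues with cleanup (see the entries that follow). Based on the class and method names in the stack trace below, a minimal sketch of that client-side sequence looks roughly like this; the method name, variable names, and the try/catch handling are assumptions, not the exact test code.

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch of the rsgroup calls behind the log entries above: create a "master"
    // group, then try to move the master's address into it. The master is not a
    // region server, so moveServers fails with the ConstraintException seen above.
    public class MoveMasterToGroupSketch {
      static void moveMasterToOwnGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");                          // "add rsgroup master"
        Address master = Address.fromParts("jenkins-hbase4.apache.org", 37329);
        try {
          rsGroupAdmin.moveServers(Collections.singleton(master), "master");
        } catch (ConstraintException e) {
          // Expected here: "Server ... is either offline or it does not exist."
          // TestRSGroupsBase logs this as a WARN ("Got this on setup, FYI") and moves on.
        }
      }
    }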
2023-07-24 04:11:33,992 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor62.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 04:11:33,993 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:33,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:33,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:33,994 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33023, jenkins-hbase4.apache.org:40545, jenkins-hbase4.apache.org:44573, jenkins-hbase4.apache.org:46393], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:33,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:33,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:34,039 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=552 (was 521) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x18bbef2f-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x7ced5fa0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData-prefix:jenkins-hbase4.apache.org,37329,1690171884592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x18bbef2f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp170873928-2005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x2892e3be sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-185152606_17 at /127.0.0.1:35538 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x49e53547-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x3d4078ed-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:44573-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40545Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp170873928-2006 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33023 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp170873928-2004 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp671106560-1755 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x078901d2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-359796de-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2024199210-1748 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-808629621_17 at /127.0.0.1:44490 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2008417894_17 at /127.0.0.1:44486 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741893_1069] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-66dbac07-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1690 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x4d3eda28-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp352136481-1659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024199210-1750 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024199210-1743 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1684-acceptor-0@77bee678-ServerConnector@3deddf17{HTTP/1.1, (http/1.1)}{0.0.0.0:36387} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp671106560-1760 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp671106560-1758-acceptor-0@116b9e61-ServerConnector@26c7bd70{HTTP/1.1, (http/1.1)}{0.0.0.0:41383} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (612341689) connection to localhost/127.0.0.1:42399 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1716 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp1872244902-1713 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp352136481-1653-acceptor-0@660f1944-ServerConnector@18454b02{HTTP/1.1, (http/1.1)}{0.0.0.0:45469} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40563,1690171872205 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46393-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-185152606_17 at /127.0.0.1:53412 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-metaLookup-shared--pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2024199210-1744-acceptor-0@22cfefeb-ServerConnector@4325af24{HTTP/1.1, (http/1.1)}{0.0.0.0:34781} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp352136481-1658 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x18bbef2f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp170873928-2000-acceptor-0@4d05db3a-ServerConnector@5d718bc5{HTTP/1.1, (http/1.1)}{0.0.0.0:37575} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-513563b0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp671106560-1759 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-185152606_17 at /127.0.0.1:35546 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1686 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,46393,1690171884706.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44573 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp170873928-2002 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp170873928-2003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1720 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_274633986_17 at /127.0.0.1:35598 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp671106560-1761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2024199210-1745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-212309678_17 at /127.0.0.1:44494 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1718 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x722584b7-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2008417894_17 at /127.0.0.1:53386 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741893_1069] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: ReadOnlyZKClient-127.0.0.1:59235@0x3d4078ed sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp352136481-1655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver 
for client DFSClient_NONMAPREDUCE_-185152606_17 at /127.0.0.1:44504 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (612341689) connection to localhost/127.0.0.1:42399 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024199210-1746 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-808629621_17 at /127.0.0.1:35514 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1714-acceptor-0@4b09d386-ServerConnector@6d8fca7f{HTTP/1.1, (http/1.1)}{0.0.0.0:34323} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp671106560-1757 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,46393,1690171884706 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x2892e3be-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x078901d2-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x7ced5fa0-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,44573,1690171884749 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:46393Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_274633986_17 at /127.0.0.1:55364 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x4d3eda28-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-185152606_17 at /127.0.0.1:53418 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost:42399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x7ced5fa0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-808629621_17 at /127.0.0.1:53390 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp352136481-1657 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:33023-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:37329 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp352136481-1654 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca-prefix:jenkins-hbase4.apache.org,40545,1690171884651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-2ba4089d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp170873928-1999 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-69188c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x71a15e14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x2892e3be-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:46393 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1688 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x49e53547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:40545 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (612341689) connection to localhost/127.0.0.1:42399 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp671106560-1754 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp352136481-1656 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33023Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x71a15e14-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:44573Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1685 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2110500845-1683 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:40545-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-212309678_17 at /127.0.0.1:53402 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2008417894_17 at /127.0.0.1:35510 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741893_1069] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2024199210-1747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1390451518-172.31.14.131-1690171846162:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x722584b7-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x078901d2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp352136481-1652 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x71a15e14-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-212309678_17 at /127.0.0.1:35524 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x3d4078ed-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp170873928-2001 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x4d3eda28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/89307590.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp671106560-1756 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1651579531.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59235@0x49e53547-SendThread(127.0.0.1:59235) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:1;jenkins-hbase4:46393-shortCompactions-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171884987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Client (612341689) connection to localhost/127.0.0.1:42399 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2024199210-1749 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1872244902-1715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171884980 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-185152606_17 at /127.0.0.1:44512 [Receiving block BP-1390451518-172.31.14.131-1690171846162:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=876 (was 792) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 478), ProcessCount=174 (was 176), AvailableMemoryMB=8031 (was 5994) - AvailableMemoryMB LEAK? 
- 2023-07-24 04:11:34,042 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=552 is superior to 500 2023-07-24 04:11:34,062 INFO [Listener at localhost/41307] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=552, OpenFileDescriptor=876, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=174, AvailableMemoryMB=8029 2023-07-24 04:11:34,062 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=552 is superior to 500 2023-07-24 04:11:34,062 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(132): testClearDeadServers 2023-07-24 04:11:34,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:34,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:34,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:34,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:34,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:34,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:34,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:34,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:34,074 INFO [RS:3;jenkins-hbase4:33023] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33023%2C1690171893856, suffix=, logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856, archiveDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs, maxLogs=32 2023-07-24 04:11:34,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:34,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:34,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:34,084 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 04:11:34,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:34,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:34,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:34,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:34,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:34,096 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK] 2023-07-24 04:11:34,097 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK] 2023-07-24 04:11:34,097 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK] 2023-07-24 04:11:34,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:34,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:34,099 INFO [RS:3;jenkins-hbase4:33023] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856/jenkins-hbase4.apache.org%2C33023%2C1690171893856.1690171894074 2023-07-24 04:11:34,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37329] to rsgroup master 2023-07-24 04:11:34,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:34,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36676 deadline: 1690173094100, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. 2023-07-24 04:11:34,101 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor62.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 04:11:34,107 DEBUG [RS:3;jenkins-hbase4:33023] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40837,DS-58a9d1de-bc4f-43ed-9ca7-c67538634a29,DISK], DatanodeInfoWithStorage[127.0.0.1:45555,DS-ab8446e9-2213-4c87-9f0e-7512ee7ce2ac,DISK], DatanodeInfoWithStorage[127.0.0.1:39051,DS-f86e7010-fb25-4a07-b4e5-aece062a8a1a,DISK]] 2023-07-24 04:11:34,108 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:34,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:34,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:34,109 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33023, jenkins-hbase4.apache.org:40545, jenkins-hbase4.apache.org:44573, jenkins-hbase4.apache.org:46393], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:34,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:34,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:34,111 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-24 04:11:34,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:34,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:34,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testClearDeadServers_1930341267 2023-07-24 04:11:34,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1930341267 2023-07-24 04:11:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:34,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:34,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:34,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:34,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:34,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40545, jenkins-hbase4.apache.org:33023, jenkins-hbase4.apache.org:44573] to rsgroup Group_testClearDeadServers_1930341267 2023-07-24 04:11:34,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1930341267 2023-07-24 04:11:34,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:34,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:34,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:34,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(238): Moving server region 73e1052e9bc949a33667944e6caa42b4, which do not belong to RSGroup Group_testClearDeadServers_1930341267 2023-07-24 04:11:34,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] procedure2.ProcedureExecutor(1029): Stored pid=133, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, REOPEN/MOVE 2023-07-24 04:11:34,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 04:11:34,128 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, REOPEN/MOVE 2023-07-24 04:11:34,129 INFO [PEWorker-5] assignment.RegionStateStore(219): 
pid=133 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:34,129 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171894129"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171894129"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171894129"}]},"ts":"1690171894129"} 2023-07-24 04:11:34,130 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=133, state=RUNNABLE; CloseRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,44573,1690171884749}] 2023-07-24 04:11:34,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73e1052e9bc949a33667944e6caa42b4, disabling compactions & flushes 2023-07-24 04:11:34,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:34,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:34,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. after waiting 0 ms 2023-07-24 04:11:34,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:34,292 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/recovered.edits/20.seqid, newMaxSeqId=20, maxSeqId=17 2023-07-24 04:11:34,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
2023-07-24 04:11:34,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:34,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 73e1052e9bc949a33667944e6caa42b4 move to jenkins-hbase4.apache.org,46393,1690171884706 record at close sequenceid=18 2023-07-24 04:11:34,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,295 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=CLOSED 2023-07-24 04:11:34,295 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171894295"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690171894295"}]},"ts":"1690171894295"} 2023-07-24 04:11:34,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=133 2023-07-24 04:11:34,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=133, state=SUCCESS; CloseRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,44573,1690171884749 in 167 msec 2023-07-24 04:11:34,298 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46393,1690171884706; forceNewPlan=false, retain=false 2023-07-24 04:11:34,449 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:34,449 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171894449"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690171894449"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690171894449"}]},"ts":"1690171894449"} 2023-07-24 04:11:34,451 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=133, state=RUNNABLE; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,46393,1690171884706}] 2023-07-24 04:11:34,607 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
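[Annotation] The AddRSGroup / MoveServers / GetRSGroupInfo master service requests recorded above are issued from the test side through RSGroupAdminClient (the earlier stack trace shows VerifyingRSGroupAdminClient.moveServers delegating to RSGroupAdminClient.moveServers). The following is a minimal sketch of that kind of call sequence, assuming the branch-2.4 hbase-rsgroup client API; the group name and server address are copied from this run purely for illustration and are not meant to reproduce the test.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    // Connects to whatever cluster hbase-site.xml on the classpath points at.
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testClearDeadServers_1930341267"; // group name seen in this run
      rsGroupAdmin.addRSGroup(group);                          // -> RSGroupAdminService.AddRSGroup
      // Moving a region server out of "default" forces any regions it hosts that stay
      // with "default" to be reopened elsewhere, which is what the REOPEN/MOVE
      // TransitRegionStateProcedure entries above record.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40545)),
          group);                                              // -> RSGroupAdminService.MoveServers
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);   // -> RSGroupAdminService.GetRSGroupInfo
      System.out.println("servers in " + group + ": " + info.getServers());
    }
  }
}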
2023-07-24 04:11:34,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73e1052e9bc949a33667944e6caa42b4, NAME => 'hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 04:11:34,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 04:11:34,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,609 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,609 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:11:34,610 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info 2023-07-24 04:11:34,610 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73e1052e9bc949a33667944e6caa42b4 columnFamilyName info 2023-07-24 04:11:34,616 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:34,616 DEBUG [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(539): loaded hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/info/c6c036af6e624012bca44b5797bc2af2 2023-07-24 04:11:34,616 INFO [StoreOpener-73e1052e9bc949a33667944e6caa42b4-1] regionserver.HStore(310): Store=73e1052e9bc949a33667944e6caa42b4/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 04:11:34,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,618 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:34,621 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73e1052e9bc949a33667944e6caa42b4; next sequenceid=21; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9962941440, jitterRate=-0.0721287727355957}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 04:11:34,621 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:34,622 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4., pid=135, masterSystemTime=1690171894603 2023-07-24 04:11:34,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:34,623 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
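Once the region is open on the target server, the master finishes pid=133 and the pending MoveServers request completes in the entries that follow ("All regions from [...] are moved back to default", "Move servers done: default => Group_testClearDeadServers_1930341267"). On the client side this whole sequence is driven by a single rsgroup move call; the following is a hedged sketch of what that call looks like with the hbase-rsgroup client API as it exists on branch-2.4, using the group name and server ports visible in this run.

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch: move a set of region servers into a named rsgroup. Before the move
// completes, the master drains their regions back to servers that stay in the
// source group, which is what produced the REOPEN/MOVE procedure logged above.
void moveServersToGroup(Connection conn) throws Exception {
  RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
  Set<Address> servers = new HashSet<>();
  servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33023));
  servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40545));
  servers.add(Address.fromParts("jenkins-hbase4.apache.org", 44573));
  rsGroupAdmin.moveServers(servers, "Group_testClearDeadServers_1930341267");
}
```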
2023-07-24 04:11:34,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=73e1052e9bc949a33667944e6caa42b4, regionState=OPEN, openSeqNum=21, regionLocation=jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:34,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690171894624"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690171894624"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690171894624"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690171894624"}]},"ts":"1690171894624"} 2023-07-24 04:11:34,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=133 2023-07-24 04:11:34,627 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=133, state=SUCCESS; OpenRegionProcedure 73e1052e9bc949a33667944e6caa42b4, server=jenkins-hbase4.apache.org,46393,1690171884706 in 174 msec 2023-07-24 04:11:34,628 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=73e1052e9bc949a33667944e6caa42b4, REOPEN/MOVE in 499 msec 2023-07-24 04:11:35,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] procedure.ProcedureSyncWait(216): waitFor pid=133 2023-07-24 04:11:35,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33023,1690171893856, jenkins-hbase4.apache.org,40545,1690171884651, jenkins-hbase4.apache.org,44573,1690171884749] are moved back to default 2023-07-24 04:11:35,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:35,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:35,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:35,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:35,135 DEBUG [Listener at localhost/41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 04:11:35,136 INFO [RS-EventLoopGroup-17-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 04:11:35,137 INFO 
[RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33023] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33023,1690171893856' ***** 2023-07-24 04:11:35,137 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33023] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x7d95a6b3 2023-07-24 04:11:35,137 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:35,140 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:35,141 INFO [RS:3;jenkins-hbase4:33023] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7416edcf{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:35,141 INFO [RS:3;jenkins-hbase4:33023] server.AbstractConnector(383): Stopped ServerConnector@5d718bc5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:35,141 INFO [RS:3;jenkins-hbase4:33023] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:35,142 INFO [RS:3;jenkins-hbase4:33023] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6fd09d0c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:35,143 INFO [RS:3;jenkins-hbase4:33023] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d6dd7e8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:35,143 INFO [RS:3;jenkins-hbase4:33023] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:35,143 INFO [RS:3;jenkins-hbase4:33023] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:35,143 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:35,143 INFO [RS:3;jenkins-hbase4:33023] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:35,144 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:35,144 DEBUG [RS:3;jenkins-hbase4:33023] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x71a15e14 to 127.0.0.1:59235 2023-07-24 04:11:35,144 DEBUG [RS:3;jenkins-hbase4:33023] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,144 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33023,1690171893856; all regions closed. 
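The "***** STOPPING region server ... *****" banner shows RS 33023 being shut down on request of an admin client ("Called by admin client hconnection-0x7d95a6b3") rather than by the cluster shutdown hook. In a mini-cluster test this is typically a one-liner against the Admin API; a minimal sketch, assuming an HBaseTestingUtility handle named `TEST_UTIL`-style `testUtil` (the name is an assumption, not quoted from the test):

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

// Sketch: stop one region server through the Admin API, which is what the
// "Called by admin client" message above corresponds to on the server side.
void stopOneRegionServer(HBaseTestingUtility testUtil, ServerName target) throws Exception {
  try (Admin admin = testUtil.getConnection().getAdmin()) {
    // Admin.stopRegionServer takes the "hostname:port" of the server to stop.
    admin.stopRegionServer(target.getAddress().toString());
  }
}
```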
2023-07-24 04:11:35,149 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,153 DEBUG [RS:3;jenkins-hbase4:33023] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:35,153 INFO [RS:3;jenkins-hbase4:33023] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33023%2C1690171893856:(num 1690171894074) 2023-07-24 04:11:35,153 DEBUG [RS:3;jenkins-hbase4:33023] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,153 INFO [RS:3;jenkins-hbase4:33023] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,154 INFO [RS:3;jenkins-hbase4:33023] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:35,154 INFO [RS:3;jenkins-hbase4:33023] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:35,154 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:35,154 INFO [RS:3;jenkins-hbase4:33023] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:35,154 INFO [RS:3;jenkins-hbase4:33023] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:35,155 INFO [RS:3;jenkins-hbase4:33023] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33023 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 2023-07-24 04:11:35,157 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,157 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33023,1690171893856] 2023-07-24 04:11:35,157 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33023,1690171893856; numProcessing=1 2023-07-24 04:11:35,159 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,159 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,160 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33023,1690171893856 already deleted, retry=false 2023-07-24 04:11:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,160 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,33023,1690171893856 on jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,161 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 znode expired, triggering replicatorRemoved event 2023-07-24 
04:11:35,161 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 znode expired, triggering replicatorRemoved event 2023-07-24 04:11:35,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,161 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33023,1690171893856 znode expired, triggering replicatorRemoved event 2023-07-24 04:11:35,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,163 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=136, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,33023,1690171893856, splitWal=true, meta=false 2023-07-24 04:11:35,163 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=136 for jenkins-hbase4.apache.org,33023,1690171893856 (carryingMeta=false) jenkins-hbase4.apache.org,33023,1690171893856/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@25c91e9a[Write locks = 1, Read locks = 0], oldState=ONLINE. 
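With the ephemeral /hbase/rs znode gone, the master processes the expiration and schedules ServerCrashProcedure pid=136 for the stopped server. Tests normally wait until the master actually considers the server dead before asserting anything, using the same Waiter utility already visible in this log ("Waiting up to [60,000] milli-secs"); a hedged sketch of such a wait:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;

// Sketch: block until the master's DeadServer list contains the stopped server,
// i.e. its expiration has been processed and a ServerCrashProcedure scheduled.
void waitForDeadServer(HBaseTestingUtility testUtil, ServerName stopped) throws Exception {
  testUtil.waitFor(60_000, () ->
      testUtil.getMiniHBaseCluster().getMaster()
          .getServerManager().getDeadServers().isDeadServer(stopped));
}
```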
2023-07-24 04:11:35,164 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:35,164 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=136, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33023,1690171893856, splitWal=true, meta=false 2023-07-24 04:11:35,165 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,33023,1690171893856 had 0 regions 2023-07-24 04:11:35,166 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=136, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33023,1690171893856, splitWal=true, meta=false, isMeta: false 2023-07-24 04:11:35,167 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,167 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,167 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:35,168 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856-splitting 2023-07-24 04:11:35,168 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:35,168 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856-splitting dir is empty, no logs to split. 2023-07-24 04:11:35,168 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,33023,1690171893856 WAL count=0, meta=false 2023-07-24 04:11:35,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 04:11:35,170 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856-splitting dir is empty, no logs to split. 2023-07-24 04:11:35,170 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,33023,1690171893856 WAL count=0, meta=false 2023-07-24 04:11:35,170 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,33023,1690171893856 WAL splitting is done? 
wals=0, meta=false 2023-07-24 04:11:35,172 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,33023,1690171893856 failed, ignore...File hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,33023,1690171893856-splitting does not exist. 2023-07-24 04:11:35,173 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,33023,1690171893856 after splitting done 2023-07-24 04:11:35,173 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase4.apache.org,33023,1690171893856 from processing; numProcessing=0 2023-07-24 04:11:35,174 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,33023,1690171893856, splitWal=true, meta=false in 13 msec 2023-07-24 04:11:35,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(2362): Client=jenkins//172.31.14.131 clear dead region servers. 2023-07-24 04:11:35,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:35,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:35,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase4.apache.org:33023] from RSGroup done 2023-07-24 04:11:35,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:35,256 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44573] ipc.CallRunner(144): callId: 67 service: ClientService methodName: Scan size: 146 connection: 172.31.14.131:42508 deadline: 1690171955256, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46393 startCode=1690171884706. As of locationSeqNum=18. 2023-07-24 04:11:35,345 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:35,345 INFO [RS:3;jenkins-hbase4:33023] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33023,1690171893856; zookeeper connection closed. 
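This is the step the test is named for: after the (empty) WAL split, the master handles "clear dead region servers" and the rsgroup endpoint reports "Remove decommissioned servers [jenkins-hbase4.apache.org:33023] from RSGroup done". A hedged sketch of the client side of those two calls, reusing the Admin and RSGroupAdmin handles from the earlier snippets:

```java
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

// Sketch: clear a dead region server on the master, then drop its address
// from the rsgroup it belonged to ("Remove decommissioned servers ... done").
void clearAndRemove(Admin admin, RSGroupAdmin rsGroupAdmin, ServerName dead) throws Exception {
  // clearDeadServers returns the subset that could NOT be cleared yet.
  List<ServerName> notCleared = admin.clearDeadServers(Collections.singletonList(dead));
  if (notCleared.isEmpty()) {
    rsGroupAdmin.removeServers(Collections.singleton(dead.getAddress()));
  }
}
```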
2023-07-24 04:11:35,345 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:33023-0x10195863d980028, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:35,346 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@12999f47] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@12999f47 2023-07-24 04:11:35,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:35,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:35,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:35,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:35,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:35,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40545, jenkins-hbase4.apache.org:44573] to rsgroup default 2023-07-24 04:11:35,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:35,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 04:11:35,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_1930341267, current retry=0 2023-07-24 04:11:35,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40545,1690171884651, jenkins-hbase4.apache.org,44573,1690171884749] are moved back to Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_1930341267 => default 2023-07-24 04:11:35,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:35,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 
remove rsgroup Group_testClearDeadServers_1930341267 2023-07-24 04:11:35,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:35,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 04:11:35,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:35,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 04:11:35,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 04:11:35,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 04:11:35,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 04:11:35,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 04:11:35,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 04:11:35,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 04:11:35,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 04:11:35,388 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 04:11:35,406 INFO [Listener at localhost/41307] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 04:11:35,406 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:35,406 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:35,407 INFO [Listener at localhost/41307] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 04:11:35,407 INFO [Listener at 
localhost/41307] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 04:11:35,407 INFO [Listener at localhost/41307] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 04:11:35,407 INFO [Listener at localhost/41307] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 04:11:35,408 INFO [Listener at localhost/41307] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41363 2023-07-24 04:11:35,408 INFO [Listener at localhost/41307] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 04:11:35,412 DEBUG [Listener at localhost/41307] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 04:11:35,413 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:35,414 INFO [Listener at localhost/41307] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 04:11:35,415 INFO [Listener at localhost/41307] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41363 connecting to ZooKeeper ensemble=127.0.0.1:59235 2023-07-24 04:11:35,421 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:413630x0, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 04:11:35,423 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41363-0x10195863d98002a connected 2023-07-24 04:11:35,423 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 04:11:35,424 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 04:11:35,424 DEBUG [Listener at localhost/41307] zookeeper.ZKUtil(164): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 04:11:35,425 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41363 2023-07-24 04:11:35,425 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41363 2023-07-24 04:11:35,425 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41363 2023-07-24 04:11:35,425 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41363 2023-07-24 04:11:35,425 DEBUG [Listener at localhost/41307] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41363 2023-07-24 
04:11:35,427 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 04:11:35,428 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 04:11:35,428 INFO [Listener at localhost/41307] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 04:11:35,428 INFO [Listener at localhost/41307] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 04:11:35,428 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 04:11:35,428 INFO [Listener at localhost/41307] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 04:11:35,429 INFO [Listener at localhost/41307] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 04:11:35,429 INFO [Listener at localhost/41307] http.HttpServer(1146): Jetty bound to port 37017 2023-07-24 04:11:35,429 INFO [Listener at localhost/41307] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 04:11:35,431 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:35,431 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1215a45c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,AVAILABLE} 2023-07-24 04:11:35,431 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:35,432 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@499999a5{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 04:11:35,437 INFO [Listener at localhost/41307] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 04:11:35,438 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 04:11:35,439 INFO [Listener at localhost/41307] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 04:11:35,439 INFO [Listener at localhost/41307] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 04:11:35,441 INFO [Listener at localhost/41307] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 04:11:35,442 INFO [Listener at localhost/41307] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@41fff0b1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:35,443 INFO [Listener at localhost/41307] server.AbstractConnector(333): Started ServerConnector@5276b624{HTTP/1.1, (http/1.1)}{0.0.0.0:37017} 2023-07-24 04:11:35,443 INFO [Listener at localhost/41307] server.Server(415): Started @51742ms 2023-07-24 04:11:35,449 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(951): ClusterId : be768ff7-bd00-4986-93b9-7f0c7f45a7c1 2023-07-24 04:11:35,450 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 04:11:35,452 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 04:11:35,452 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 04:11:35,453 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 04:11:35,454 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ReadOnlyZKClient(139): Connect 0x09d6f9f8 to 127.0.0.1:59235 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 04:11:35,457 DEBUG [RS:4;jenkins-hbase4:41363] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71116b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 04:11:35,457 DEBUG [RS:4;jenkins-hbase4:41363] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32e9288, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 04:11:35,470 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:41363 2023-07-24 04:11:35,470 INFO [RS:4;jenkins-hbase4:41363] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 04:11:35,470 INFO [RS:4;jenkins-hbase4:41363] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 04:11:35,470 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1022): About to register with Master. 
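The teardown then restores the cluster to its original size ("Restoring servers: 1" above): a fresh region server, RS:4 on port 41363, is constructed with its own RPC executors, block cache, and Jetty info server, and is now about to register with the master. Against a mini cluster that restart is a single call; a minimal sketch, with the `testUtil` handle again being an assumed name:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

// Sketch: bring the mini cluster back to its expected server count by
// starting one more region server and waiting until it reports online.
void restoreOneRegionServer(HBaseTestingUtility testUtil) throws Exception {
  JVMClusterUtil.RegionServerThread rst =
      testUtil.getMiniHBaseCluster().startRegionServer();
  rst.waitForServerOnline();
}
```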
2023-07-24 04:11:35,471 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37329,1690171884592 with isa=jenkins-hbase4.apache.org/172.31.14.131:41363, startcode=1690171895405 2023-07-24 04:11:35,471 DEBUG [RS:4;jenkins-hbase4:41363] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 04:11:35,472 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34831, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 04:11:35,473 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37329] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,473 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 04:11:35,473 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca 2023-07-24 04:11:35,473 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42399 2023-07-24 04:11:35,473 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45469 2023-07-24 04:11:35,476 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,476 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,476 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,476 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,476 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,476 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 04:11:35,476 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,476 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41363,1690171895405] 2023-07-24 04:11:35,477 WARN 
[RS:4;jenkins-hbase4:41363] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 04:11:35,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,477 INFO [RS:4;jenkins-hbase4:41363] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 04:11:35,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,477 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37329,1690171884592] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 04:11:35,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,482 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,482 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,482 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,484 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,484 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,487 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,487 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,487 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,487 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ZKUtil(162): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,488 DEBUG [RS:4;jenkins-hbase4:41363] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 04:11:35,488 INFO [RS:4;jenkins-hbase4:41363] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 04:11:35,489 INFO [RS:4;jenkins-hbase4:41363] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 04:11:35,490 INFO [RS:4;jenkins-hbase4:41363] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 04:11:35,490 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:35,490 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 04:11:35,491 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,492 DEBUG [RS:4;jenkins-hbase4:41363] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 04:11:35,498 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:35,498 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:35,499 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 04:11:35,511 INFO [RS:4;jenkins-hbase4:41363] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 04:11:35,511 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41363,1690171895405-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 04:11:35,522 INFO [RS:4;jenkins-hbase4:41363] regionserver.Replication(203): jenkins-hbase4.apache.org,41363,1690171895405 started 2023-07-24 04:11:35,522 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41363,1690171895405, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41363, sessionid=0x10195863d98002a 2023-07-24 04:11:35,522 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 04:11:35,522 DEBUG [RS:4;jenkins-hbase4:41363] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,522 DEBUG [RS:4;jenkins-hbase4:41363] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41363,1690171895405' 2023-07-24 04:11:35,522 DEBUG [RS:4;jenkins-hbase4:41363] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 04:11:35,523 DEBUG [RS:4;jenkins-hbase4:41363] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 04:11:35,523 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 04:11:35,523 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 04:11:35,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 04:11:35,523 DEBUG [RS:4;jenkins-hbase4:41363] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,523 DEBUG [RS:4;jenkins-hbase4:41363] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41363,1690171895405' 2023-07-24 04:11:35,523 DEBUG [RS:4;jenkins-hbase4:41363] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 04:11:35,524 DEBUG [RS:4;jenkins-hbase4:41363] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 04:11:35,524 DEBUG [RS:4;jenkins-hbase4:41363] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 04:11:35,524 INFO [RS:4;jenkins-hbase4:41363] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 04:11:35,524 INFO [RS:4;jenkins-hbase4:41363] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
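What follows is the last piece of per-test housekeeping: the framework recreates the special "master" rsgroup and tries to move the master's own address (jenkins-hbase4.apache.org:37329) into it. The RSGroupAdminServer rejects this with a ConstraintException because the master is not a registered region server, and TestRSGroupsBase simply logs it as "Got this on setup, FYI" and continues. A hedged sketch of that deliberately tolerant call:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

// Sketch: try to park the master's address in its own rsgroup; since the master
// is not a region server, the ConstraintException seen below is expected and swallowed.
void tryMoveMasterToOwnGroup(RSGroupAdmin rsGroupAdmin) throws Exception {
  rsGroupAdmin.addRSGroup("master");
  try {
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37329)),
        "master");
  } catch (ConstraintException expected) {
    // "Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist."
  }
}
```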
2023-07-24 04:11:35,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 04:11:35,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 04:11:35,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 04:11:35,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 04:11:35,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:35,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:35,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37329] to rsgroup master 2023-07-24 04:11:35,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 04:11:35,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36676 deadline: 1690173095534, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. 2023-07-24 04:11:35,535 WARN [Listener at localhost/41307] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor62.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37329 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-24 04:11:35,538 INFO [Listener at localhost/41307] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 04:11:35,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 04:11:35,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 04:11:35,540 INFO [Listener at localhost/41307] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:40545, jenkins-hbase4.apache.org:41363, jenkins-hbase4.apache.org:44573, jenkins-hbase4.apache.org:46393], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 04:11:35,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 04:11:35,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37329] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 04:11:35,569 INFO [Listener at localhost/41307] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=565 (was 552) - Thread LEAK? -, OpenFileDescriptor=855 (was 876), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 409), ProcessCount=174 (was 174), AvailableMemoryMB=8022 (was 8029) 2023-07-24 04:11:35,569 WARN [Listener at localhost/41307] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-24 04:11:35,569 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 04:11:35,569 INFO [Listener at localhost/41307] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 04:11:35,570 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x078901d2 to 127.0.0.1:59235 2023-07-24 04:11:35,570 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,570 DEBUG [Listener at localhost/41307] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 04:11:35,570 DEBUG [Listener at localhost/41307] util.JVMClusterUtil(257): Found active master hash=600793103, stopped=false 2023-07-24 04:11:35,570 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 04:11:35,570 DEBUG [Listener at localhost/41307] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 04:11:35,570 INFO [Listener at localhost/41307] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37329,1690171884592 2023-07-24 04:11:35,573 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:35,573 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): 
regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:35,573 INFO [Listener at localhost/41307] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 04:11:35,573 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:35,574 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 04:11:35,573 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:35,573 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 04:11:35,575 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:35,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:35,576 DEBUG [Listener at localhost/41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ced5fa0 to 127.0.0.1:59235 2023-07-24 04:11:35,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:35,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:35,576 DEBUG [Listener at localhost/41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 04:11:35,576 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40545,1690171884651' ***** 2023-07-24 04:11:35,576 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:35,576 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46393,1690171884706' ***** 2023-07-24 04:11:35,576 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:35,577 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:35,577 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:35,577 INFO [Listener at localhost/41307] 
regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44573,1690171884749' ***** 2023-07-24 04:11:35,577 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:35,577 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,577 INFO [Listener at localhost/41307] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41363,1690171895405' ***** 2023-07-24 04:11:35,577 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:35,578 INFO [Listener at localhost/41307] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 04:11:35,578 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 04:11:35,581 INFO [RS:0;jenkins-hbase4:40545] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1494140{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:35,581 INFO [RS:1;jenkins-hbase4:46393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f905ed2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:35,582 INFO [RS:0;jenkins-hbase4:40545] server.AbstractConnector(383): Stopped ServerConnector@3deddf17{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:35,582 INFO [RS:2;jenkins-hbase4:44573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5044b346{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:35,582 INFO [RS:0;jenkins-hbase4:40545] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:35,582 INFO [RS:1;jenkins-hbase4:46393] server.AbstractConnector(383): Stopped ServerConnector@6d8fca7f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:35,584 INFO [RS:1;jenkins-hbase4:46393] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:35,584 INFO [RS:2;jenkins-hbase4:44573] server.AbstractConnector(383): Stopped ServerConnector@4325af24{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:35,585 INFO [RS:0;jenkins-hbase4:40545] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6dad5212{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:35,586 INFO [RS:2;jenkins-hbase4:44573] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:35,586 INFO [RS:1;jenkins-hbase4:46393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a617046{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:35,587 INFO [RS:0;jenkins-hbase4:40545] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@678c6f48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:35,586 INFO [RS:4;jenkins-hbase4:41363] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@41fff0b1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 04:11:35,589 INFO [RS:1;jenkins-hbase4:46393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@11af7c60{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:35,588 INFO [RS:2;jenkins-hbase4:44573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67a2054{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:35,589 INFO [RS:0;jenkins-hbase4:40545] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:35,590 INFO [RS:0;jenkins-hbase4:40545] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:35,590 INFO [RS:0;jenkins-hbase4:40545] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:35,590 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,590 DEBUG [RS:0;jenkins-hbase4:40545] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49e53547 to 127.0.0.1:59235 2023-07-24 04:11:35,590 INFO [RS:2;jenkins-hbase4:44573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@48ab91ed{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:35,591 INFO [RS:4;jenkins-hbase4:41363] server.AbstractConnector(383): Stopped ServerConnector@5276b624{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 04:11:35,591 INFO [RS:4;jenkins-hbase4:41363] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 04:11:35,590 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:35,590 DEBUG [RS:0;jenkins-hbase4:40545] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,593 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40545,1690171884651; all regions closed. 2023-07-24 04:11:35,592 INFO [RS:4;jenkins-hbase4:41363] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@499999a5{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 04:11:35,593 INFO [RS:1;jenkins-hbase4:46393] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:35,593 INFO [RS:1;jenkins-hbase4:46393] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:35,593 INFO [RS:1;jenkins-hbase4:46393] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 04:11:35,593 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(3305): Received CLOSE for 6d1026a50e3a812feaa5fb2336097299 2023-07-24 04:11:35,593 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:35,593 INFO [RS:2;jenkins-hbase4:44573] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:35,595 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(3305): Received CLOSE for 6aa1ab126d58dcf7d835257119c9304f 2023-07-24 04:11:35,603 INFO [RS:4;jenkins-hbase4:41363] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1215a45c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED} 2023-07-24 04:11:35,602 INFO [RS:2;jenkins-hbase4:44573] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:35,603 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:35,602 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 04:11:35,603 INFO [RS:2;jenkins-hbase4:44573] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 04:11:35,603 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,603 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(3305): Received CLOSE for 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:35,604 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46393,1690171884706 2023-07-24 04:11:35,604 DEBUG [RS:1;jenkins-hbase4:46393] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18bbef2f to 127.0.0.1:59235 2023-07-24 04:11:35,604 DEBUG [RS:1;jenkins-hbase4:46393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,604 INFO [RS:1;jenkins-hbase4:46393] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:35,604 INFO [RS:1;jenkins-hbase4:46393] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:35,604 INFO [RS:1;jenkins-hbase4:46393] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:35,604 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 04:11:35,604 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,605 DEBUG [RS:2;jenkins-hbase4:44573] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d4078ed to 127.0.0.1:59235 2023-07-24 04:11:35,605 DEBUG [RS:2;jenkins-hbase4:44573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,605 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44573,1690171884749; all regions closed. 
2023-07-24 04:11:35,605 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 04:11:35,605 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1478): Online Regions={6d1026a50e3a812feaa5fb2336097299=hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299., 6aa1ab126d58dcf7d835257119c9304f=hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f., 1588230740=hbase:meta,,1.1588230740, 73e1052e9bc949a33667944e6caa42b4=hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4.} 2023-07-24 04:11:35,605 DEBUG [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1504): Waiting on 1588230740, 6aa1ab126d58dcf7d835257119c9304f, 6d1026a50e3a812feaa5fb2336097299, 73e1052e9bc949a33667944e6caa42b4 2023-07-24 04:11:35,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d1026a50e3a812feaa5fb2336097299, disabling compactions & flushes 2023-07-24 04:11:35,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:35,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:35,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. after waiting 0 ms 2023-07-24 04:11:35,606 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 04:11:35,606 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 04:11:35,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:35,606 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 04:11:35,608 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 04:11:35,608 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 04:11:35,607 INFO [RS:4;jenkins-hbase4:41363] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 04:11:35,610 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.28 KB heapSize=7.76 KB 2023-07-24 04:11:35,615 INFO [RS:4;jenkins-hbase4:41363] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 04:11:35,615 INFO [RS:4;jenkins-hbase4:41363] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 04:11:35,615 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,615 DEBUG [RS:4;jenkins-hbase4:41363] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09d6f9f8 to 127.0.0.1:59235 2023-07-24 04:11:35,615 DEBUG [RS:4;jenkins-hbase4:41363] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,615 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41363,1690171895405; all regions closed. 2023-07-24 04:11:35,615 DEBUG [RS:4;jenkins-hbase4:41363] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,615 INFO [RS:4;jenkins-hbase4:41363] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,628 DEBUG [RS:0;jenkins-hbase4:40545] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:35,628 INFO [RS:0;jenkins-hbase4:40545] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40545%2C1690171884651:(num 1690171885184) 2023-07-24 04:11:35,628 DEBUG [RS:0;jenkins-hbase4:40545] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,628 INFO [RS:0;jenkins-hbase4:40545] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,629 DEBUG [RS:2;jenkins-hbase4:44573] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs 2023-07-24 04:11:35,629 INFO [RS:2;jenkins-hbase4:44573] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44573%2C1690171884749:(num 1690171885190) 2023-07-24 04:11:35,629 DEBUG [RS:2;jenkins-hbase4:44573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 04:11:35,629 INFO [RS:2;jenkins-hbase4:44573] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,639 INFO [RS:2;jenkins-hbase4:44573] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:35,639 INFO [RS:4;jenkins-hbase4:41363] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:35,639 INFO [RS:2;jenkins-hbase4:44573] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:35,639 INFO [RS:2;jenkins-hbase4:44573] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:35,639 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:35,639 INFO [RS:4;jenkins-hbase4:41363] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:35,639 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:35,639 INFO [RS:2;jenkins-hbase4:44573] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:35,639 INFO [RS:4;jenkins-hbase4:41363] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-24 04:11:35,640 INFO [RS:4;jenkins-hbase4:41363] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:35,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/quota/6d1026a50e3a812feaa5fb2336097299/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 04:11:35,640 INFO [RS:2;jenkins-hbase4:44573] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44573 2023-07-24 04:11:35,641 INFO [RS:0;jenkins-hbase4:40545] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 04:11:35,641 INFO [RS:0;jenkins-hbase4:40545] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 04:11:35,641 INFO [RS:0;jenkins-hbase4:40545] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 04:11:35,641 INFO [RS:0;jenkins-hbase4:40545] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 04:11:35,641 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 04:11:35,642 INFO [RS:0;jenkins-hbase4:40545] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40545 2023-07-24 04:11:35,644 INFO [RS:4;jenkins-hbase4:41363] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41363 2023-07-24 04:11:35,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:35,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d1026a50e3a812feaa5fb2336097299: 2023-07-24 04:11:35,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690171878315.6d1026a50e3a812feaa5fb2336097299. 2023-07-24 04:11:35,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6aa1ab126d58dcf7d835257119c9304f, disabling compactions & flushes 2023-07-24 04:11:35,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:35,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:35,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. after waiting 0 ms 2023-07-24 04:11:35,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 
2023-07-24 04:11:35,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6aa1ab126d58dcf7d835257119c9304f 1/1 column families, dataSize=4.27 KB heapSize=7.02 KB 2023-07-24 04:11:35,663 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.28 KB at sequenceid=173 (bloomFilter=false), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/5ae48eb1441942a6b5cb7381474467eb 2023-07-24 04:11:35,667 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,667 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 04:11:35,669 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/.tmp/info/5ae48eb1441942a6b5cb7381474467eb as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/5ae48eb1441942a6b5cb7381474467eb 2023-07-24 04:11:35,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.27 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/.tmp/m/4b85b706140c4208b62f173deb6c5c11 2023-07-24 04:11:35,677 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/5ae48eb1441942a6b5cb7381474467eb, entries=31, sequenceid=173, filesize=8.3 K 2023-07-24 04:11:35,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.28 KB/4384, heapSize ~7.24 KB/7416, currentSize=0 B/0 for 1588230740 in 69ms, sequenceid=173, compaction requested=false 2023-07-24 04:11:35,685 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72] to archive 2023-07-24 04:11:35,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4b85b706140c4208b62f173deb6c5c11 2023-07-24 04:11:35,690 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-24 04:11:35,692 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/.tmp/m/4b85b706140c4208b62f173deb6c5c11 as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/4b85b706140c4208b62f173deb6c5c11 2023-07-24 04:11:35,695 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/hbase/meta/1588230740/info/ea6a294b028040dcb802cfd24f5c7162 2023-07-24 04:11:35,697 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/hbase/meta/1588230740/info/35ed1307fbf449eea8d4667880d2c6b7 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, 
quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,697 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41363,1690171895405 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44573,1690171884749 2023-07-24 04:11:35,698 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40545,1690171884651 2023-07-24 04:11:35,699 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40545,1690171884651] 2023-07-24 04:11:35,699 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40545,1690171884651; numProcessing=1 2023-07-24 04:11:35,700 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72 to 
hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/hbase/meta/1588230740/info/de3310ea376f4898b8ea51fb19fe2f72 2023-07-24 04:11:35,705 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40545,1690171884651 already deleted, retry=false 2023-07-24 04:11:35,705 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40545,1690171884651 expired; onlineServers=3 2023-07-24 04:11:35,705 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44573,1690171884749] 2023-07-24 04:11:35,705 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44573,1690171884749; numProcessing=2 2023-07-24 04:11:35,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4b85b706140c4208b62f173deb6c5c11 2023-07-24 04:11:35,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/m/4b85b706140c4208b62f173deb6c5c11, entries=6, sequenceid=95, filesize=5.4 K 2023-07-24 04:11:35,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.27 KB/4376, heapSize ~7.01 KB/7176, currentSize=0 B/0 for 6aa1ab126d58dcf7d835257119c9304f in 62ms, sequenceid=95, compaction requested=true 2023-07-24 04:11:35,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/rsgroup/6aa1ab126d58dcf7d835257119c9304f/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=77 2023-07-24 04:11:35,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 04:11:35,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:35,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6aa1ab126d58dcf7d835257119c9304f: 2023-07-24 04:11:35,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690171855069.6aa1ab126d58dcf7d835257119c9304f. 2023-07-24 04:11:35,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73e1052e9bc949a33667944e6caa42b4, disabling compactions & flushes 2023-07-24 04:11:35,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:35,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:35,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 
after waiting 0 ms 2023-07-24 04:11:35,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:35,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/namespace/73e1052e9bc949a33667944e6caa42b4/recovered.edits/23.seqid, newMaxSeqId=23, maxSeqId=20 2023-07-24 04:11:35,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:35,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73e1052e9bc949a33667944e6caa42b4: 2023-07-24 04:11:35,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690171854770.73e1052e9bc949a33667944e6caa42b4. 2023-07-24 04:11:35,753 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0] to archive 2023-07-24 04:11:35,754 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-24 04:11:35,756 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/hbase/meta/1588230740/table/e673da21eba54a61b6fc1007d80762bf 2023-07-24 04:11:35,757 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/hbase/meta/1588230740/table/763c92d71bbe40558f4f7141fc340072 2023-07-24 04:11:35,758 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0 to hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/archive/data/hbase/meta/1588230740/table/d0bd780de0244b1c96eaeb749f1d60a0 2023-07-24 04:11:35,763 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/data/hbase/meta/1588230740/recovered.edits/176.seqid, newMaxSeqId=176, maxSeqId=158 2023-07-24 04:11:35,764 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 
2023-07-24 04:11:35,764 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:35,764 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 04:11:35,765 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 04:11:35,800 INFO [RS:4;jenkins-hbase4:41363] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41363,1690171895405; zookeeper connection closed. 2023-07-24 04:11:35,800 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:35,800 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:41363-0x10195863d98002a, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:35,800 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@db56acf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@db56acf 2023-07-24 04:11:35,801 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44573,1690171884749 already deleted, retry=false 2023-07-24 04:11:35,801 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44573,1690171884749 expired; onlineServers=2 2023-07-24 04:11:35,801 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41363,1690171895405] 2023-07-24 04:11:35,801 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41363,1690171895405; numProcessing=3 2023-07-24 04:11:35,802 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41363,1690171895405 already deleted, retry=false 2023-07-24 04:11:35,802 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41363,1690171895405 expired; onlineServers=1 2023-07-24 04:11:35,806 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46393,1690171884706; all regions closed. 
2023-07-24 04:11:35,812 DEBUG [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs
2023-07-24 04:11:35,812 INFO [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46393%2C1690171884706.meta:.meta(num 1690171885321)
2023-07-24 04:11:35,815 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/WALs/jenkins-hbase4.apache.org,46393,1690171884706/jenkins-hbase4.apache.org%2C46393%2C1690171884706.1690171885196 not finished, retry = 0
2023-07-24 04:11:35,873 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:35,873 INFO [RS:2;jenkins-hbase4:44573] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44573,1690171884749; zookeeper connection closed.
2023-07-24 04:11:35,873 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:44573-0x10195863d98001f, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:35,873 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@51b877f1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@51b877f1
2023-07-24 04:11:35,918 DEBUG [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/oldWALs
2023-07-24 04:11:35,918 INFO [RS:1;jenkins-hbase4:46393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46393%2C1690171884706:(num 1690171885196)
2023-07-24 04:11:35,918 DEBUG [RS:1;jenkins-hbase4:46393] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-24 04:11:35,918 INFO [RS:1;jenkins-hbase4:46393] regionserver.LeaseManager(133): Closed leases
2023-07-24 04:11:35,918 INFO [RS:1;jenkins-hbase4:46393] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-24 04:11:35,918 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-24 04:11:35,919 INFO [RS:1;jenkins-hbase4:46393] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46393
2023-07-24 04:11:35,922 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46393,1690171884706
2023-07-24 04:11:35,922 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-24 04:11:35,924 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46393,1690171884706]
2023-07-24 04:11:35,924 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46393,1690171884706; numProcessing=4
2023-07-24 04:11:35,925 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46393,1690171884706 already deleted, retry=false
2023-07-24 04:11:35,925 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46393,1690171884706 expired; onlineServers=0
2023-07-24 04:11:35,925 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37329,1690171884592' *****
2023-07-24 04:11:35,925 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-24 04:11:35,925 DEBUG [M:0;jenkins-hbase4:37329] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27f0f0ae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-24 04:11:35,925 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-24 04:11:35,927 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-24 04:11:35,927 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 04:11:35,927 INFO [M:0;jenkins-hbase4:37329] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@262b79d3{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-24 04:11:35,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 04:11:35,928 INFO [M:0;jenkins-hbase4:37329] server.AbstractConnector(383): Stopped ServerConnector@18454b02{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-24 04:11:35,928 INFO [M:0;jenkins-hbase4:37329] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-24 04:11:35,928 INFO [M:0;jenkins-hbase4:37329] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ec8d0d4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED}
2023-07-24 04:11:35,929 INFO [M:0;jenkins-hbase4:37329] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@18ea3f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/hadoop.log.dir/,STOPPED}
2023-07-24 04:11:35,929 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37329,1690171884592
2023-07-24 04:11:35,929 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37329,1690171884592; all regions closed.
2023-07-24 04:11:35,929 DEBUG [M:0;jenkins-hbase4:37329] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-24 04:11:35,929 INFO [M:0;jenkins-hbase4:37329] master.HMaster(1491): Stopping master jetty server
2023-07-24 04:11:35,930 INFO [M:0;jenkins-hbase4:37329] server.AbstractConnector(383): Stopped ServerConnector@26c7bd70{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-24 04:11:35,930 DEBUG [M:0;jenkins-hbase4:37329] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-24 04:11:35,930 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-24 04:11:35,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171884980] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690171884980,5,FailOnTimeoutGroup]
2023-07-24 04:11:35,930 DEBUG [M:0;jenkins-hbase4:37329] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-24 04:11:35,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171884987] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690171884987,5,FailOnTimeoutGroup]
2023-07-24 04:11:35,930 INFO [M:0;jenkins-hbase4:37329] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-24 04:11:35,930 INFO [M:0;jenkins-hbase4:37329] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-24 04:11:35,931 INFO [M:0;jenkins-hbase4:37329] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-24 04:11:35,931 DEBUG [M:0;jenkins-hbase4:37329] master.HMaster(1512): Stopping service threads
2023-07-24 04:11:35,931 INFO [M:0;jenkins-hbase4:37329] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-24 04:11:35,931 ERROR [M:0;jenkins-hbase4:37329] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-24 04:11:35,931 INFO [M:0;jenkins-hbase4:37329] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-24 04:11:35,931 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-24 04:11:35,931 DEBUG [M:0;jenkins-hbase4:37329] zookeeper.ZKUtil(398): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-24 04:11:35,931 WARN [M:0;jenkins-hbase4:37329] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-24 04:11:35,931 INFO [M:0;jenkins-hbase4:37329] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-24 04:11:35,931 INFO [M:0;jenkins-hbase4:37329] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-24 04:11:35,932 DEBUG [M:0;jenkins-hbase4:37329] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-24 04:11:35,932 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 04:11:35,932 DEBUG [M:0;jenkins-hbase4:37329] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 04:11:35,932 DEBUG [M:0;jenkins-hbase4:37329] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-24 04:11:35,932 DEBUG [M:0;jenkins-hbase4:37329] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 04:11:35,932 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.33 KB heapSize=56.08 KB
2023-07-24 04:11:35,950 INFO [M:0;jenkins-hbase4:37329] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.33 KB at sequenceid=1035 (bloomFilter=true), to=hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d8be8f88d6594a968d949456a473cc9e
2023-07-24 04:11:35,956 DEBUG [M:0;jenkins-hbase4:37329] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d8be8f88d6594a968d949456a473cc9e as hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d8be8f88d6594a968d949456a473cc9e
2023-07-24 04:11:35,961 INFO [M:0;jenkins-hbase4:37329] regionserver.HStore(1080): Added hdfs://localhost:42399/user/jenkins/test-data/8d2b7421-e889-3d76-b2bb-ca13d14fdeca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d8be8f88d6594a968d949456a473cc9e, entries=15, sequenceid=1035, filesize=6.9 K
2023-07-24 04:11:35,962 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegion(2948): Finished flush of dataSize ~45.33 KB/46420, heapSize ~56.06 KB/57408, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=1035, compaction requested=true
2023-07-24 04:11:35,967 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 04:11:35,968 DEBUG [M:0;jenkins-hbase4:37329] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-24 04:11:35,973 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:35,973 INFO [RS:0;jenkins-hbase4:40545] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40545,1690171884651; zookeeper connection closed.
2023-07-24 04:11:35,973 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:40545-0x10195863d98001d, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:35,974 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@598e96a4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@598e96a4
2023-07-24 04:11:35,975 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-24 04:11:35,975 INFO [M:0;jenkins-hbase4:37329] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-24 04:11:35,976 INFO [M:0;jenkins-hbase4:37329] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37329
2023-07-24 04:11:35,978 DEBUG [M:0;jenkins-hbase4:37329] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37329,1690171884592 already deleted, retry=false
2023-07-24 04:11:36,374 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:36,374 INFO [M:0;jenkins-hbase4:37329] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37329,1690171884592; zookeeper connection closed.
2023-07-24 04:11:36,374 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): master:37329-0x10195863d98001c, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:36,415 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2023-07-24 04:11:36,474 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 04:11:36,474 INFO [RS:1;jenkins-hbase4:46393] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46393,1690171884706; zookeeper connection closed.
2023-07-24 04:11:36,475 DEBUG [Listener at localhost/41307-EventThread] zookeeper.ZKWatcher(600): regionserver:46393-0x10195863d98001e, quorum=127.0.0.1:59235, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 04:11:36,475 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@17d41c48] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@17d41c48 2023-07-24 04:11:36,475 INFO [Listener at localhost/41307] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-24 04:11:36,475 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 04:11:36,484 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 04:11:36,589 WARN [BP-1390451518-172.31.14.131-1690171846162 heartbeating to localhost/127.0.0.1:42399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 04:11:36,589 WARN [BP-1390451518-172.31.14.131-1690171846162 heartbeating to localhost/127.0.0.1:42399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1390451518-172.31.14.131-1690171846162 (Datanode Uuid 6fb9a989-2092-4618-92a6-19c5d5216065) service to localhost/127.0.0.1:42399 2023-07-24 04:11:36,591 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data5/current/BP-1390451518-172.31.14.131-1690171846162] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 04:11:36,591 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data6/current/BP-1390451518-172.31.14.131-1690171846162] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 04:11:36,593 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 04:11:36,596 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 04:11:36,699 WARN [BP-1390451518-172.31.14.131-1690171846162 heartbeating to localhost/127.0.0.1:42399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 04:11:36,700 WARN [BP-1390451518-172.31.14.131-1690171846162 heartbeating to localhost/127.0.0.1:42399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1390451518-172.31.14.131-1690171846162 (Datanode Uuid 267eb9ad-ba53-4f9f-a855-80a939a2da6d) service to localhost/127.0.0.1:42399 2023-07-24 04:11:36,700 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data3/current/BP-1390451518-172.31.14.131-1690171846162] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 04:11:36,701 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data4/current/BP-1390451518-172.31.14.131-1690171846162] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 04:11:36,702 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 04:11:36,705 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 04:11:36,808 WARN [BP-1390451518-172.31.14.131-1690171846162 heartbeating to localhost/127.0.0.1:42399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 04:11:36,808 WARN [BP-1390451518-172.31.14.131-1690171846162 heartbeating to localhost/127.0.0.1:42399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1390451518-172.31.14.131-1690171846162 (Datanode Uuid 7c96df4f-6d55-4465-8844-bd97b2788d10) service to localhost/127.0.0.1:42399 2023-07-24 04:11:36,809 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data1/current/BP-1390451518-172.31.14.131-1690171846162] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 04:11:36,809 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/216005e7-d25e-eae0-4e33-ca59670bdd43/cluster_6281705e-32fc-2cfd-82f2-3f22e1bb605c/dfs/data/data2/current/BP-1390451518-172.31.14.131-1690171846162] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 04:11:36,837 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 04:11:36,960 INFO [Listener at localhost/41307] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 04:11:37,015 INFO [Listener at localhost/41307] hbase.HBaseTestingUtility(1293): Minicluster is down
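Editor's note: the shutdown sequence above is what HBaseTestingUtility emits when a test tears down its in-process minicluster. As a point of reference only, the following is a minimal sketch (not taken from this log) of how a JUnit test typically drives that lifecycle; the class name, region server count, and the listTableDescriptors() check are illustrative assumptions, while startMiniCluster() and shutdownMiniCluster() are the public HBaseTestingUtility entry points whose effects the log records.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class MiniClusterLifecycleSketch {
      // One shared utility per test class; it owns the mini HDFS, ZooKeeper and HBase processes.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpCluster() throws Exception {
        // Starts an in-process HMaster plus region servers (3 here, chosen for illustration),
        // backed by a mini DFS and a mini ZooKeeper quorum.
        TEST_UTIL.startMiniCluster(3);
      }

      @Test
      public void clusterIsServing() throws Exception {
        // Any admin call verifies the cluster is up before the teardown below runs.
        TEST_UTIL.getAdmin().listTableDescriptors();
      }

      @AfterClass
      public static void tearDownCluster() throws Exception {
        // Produces the sequence seen above: regions close, WALs are archived, region servers
        // and the master exit, DataNodes stop, MiniZK shuts down, and the utility finally
        // logs "Minicluster is down".
        TEST_UTIL.shutdownMiniCluster();
      }
    }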